modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: sequence | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---
shrimantasatpati/english_to_hindi_legal_FT_gemma3_4b_it | shrimantasatpati | 2025-05-21T18:24:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T18:23:56Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shrimantasatpati
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
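For illustration, a minimal inference sketch using Unsloth's `FastLanguageModel` API (the repo ID comes from this card; the sequence length, 4-bit loading, and sample prompt are assumptions, and Gemma 3 checkpoints may require a newer Unsloth loader):

```python
# Hypothetical inference sketch with Unsloth; not the exact code used to train this model.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="shrimantasatpati/english_to_hindi_legal_FT_gemma3_4b_it",
    max_seq_length=2048,  # assumption: adjust to your use case
    load_in_4bit=True,    # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode

inputs = tokenizer("Translate to Hindi: The contract is void.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```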
|
Swissmountain/model-llama-3.2 | Swissmountain | 2025-05-21T18:20:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T18:20:16Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Swissmountain
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
debbieliang/idefics2-8b-dpo | debbieliang | 2025-05-21T18:20:34Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:unsloth/Llama-3.2-11B-Vision-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T18:19:51Z | ---
base_model: unsloth/Llama-3.2-11B-Vision-Instruct
library_name: transformers
model_name: idefics2-8b-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for idefics2-8b-dpo
This model is a fine-tuned version of [unsloth/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="debbieliang/idefics2-8b-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/debbieliang/huggingface/runs/0q68o8qu)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
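For context, a hedged sketch of what a DPO run with TRL looks like (not the exact training script for this model; the dataset name and hyperparameters are placeholders):

```python
# Hypothetical DPO sketch with TRL's DPOTrainer; dataset and settings are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "unsloth/Llama-3.2-11B-Vision-Instruct"  # base model from this card
# Note: a vision checkpoint may need its dedicated model class and processor;
# AutoModelForCausalLM is used here only to keep the sketch generic.
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# DPO expects preference pairs with prompt / chosen / rejected columns.
train_dataset = load_dataset("your/preference-dataset", split="train")  # placeholder

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="idefics2-8b-dpo", beta=0.1),  # beta is the DPO temperature
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```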
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
leslie-aguirre-instagram-quien-es-y-video/Ver.Video.leslie.aguirre.instagram.quien.es.y.video.de.la.aficionada.del.america | leslie-aguirre-instagram-quien-es-y-video | 2025-05-21T18:20:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-21T18:19:40Z | |
TheGardener/KD-Embedding-and-MLP-Llama-0.7B-epoch-2nd | TheGardener | 2025-05-21T18:19:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T18:18:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lyng148/bartpho-vietnews-sum | lyng148 | 2025-05-21T18:18:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:vinai/bartpho-word-base",
"base_model:finetune:vinai/bartpho-word-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2025-05-21T18:17:43Z | ---
library_name: transformers
license: mit
base_model: vinai/bartpho-word-base
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bartpho-vietnews-sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bartpho-vietnews-sum
This model is a fine-tuned version of [vinai/bartpho-word-base](https://huggingface.co/vinai/bartpho-word-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3348
- Rouge1: 0.5983
- Rouge2: 0.2976
- Rougel: 0.4114
- Rougelsum: 0.4114
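A hedged usage sketch for this checkpoint (the task follows the card's `summarization` tag; the input text and length limits are placeholders):

```python
# Hypothetical usage sketch; the article text and length limits are placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="lyng148/bartpho-vietnews-sum")
article = "..."  # a Vietnamese news article goes here
print(summarizer(article, max_length=128, min_length=32)[0]["summary_text"])
```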
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.6491 | 0.3828 | 2000 | 3.5148 | 0.5802 | 0.2748 | 0.3923 | 0.3922 |
| 3.3865 | 0.7656 | 4000 | 3.4149 | 0.5888 | 0.2845 | 0.4010 | 0.4009 |
| 3.0592 | 1.1483 | 6000 | 3.3750 | 0.5923 | 0.2911 | 0.4055 | 0.4054 |
| 2.9393 | 1.5311 | 8000 | 3.3433 | 0.5915 | 0.2912 | 0.4067 | 0.4066 |
| 2.8404 | 1.9139 | 10000 | 3.3122 | 0.5952 | 0.2938 | 0.4082 | 0.4080 |
| 2.6108 | 2.2967 | 12000 | 3.3402 | 0.5964 | 0.2948 | 0.4086 | 0.4085 |
| 2.5911 | 2.6795 | 14000 | 3.3348 | 0.5983 | 0.2976 | 0.4114 | 0.4114 |
### Framework versions
- Transformers 4.52.1
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kokovova/d748cb41-997e-4c1f-bb9a-86fb051552d4 | kokovova | 2025-05-21T18:16:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-21T18:05:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d748cb41-997e-4c1f-bb9a-86fb051552d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 66099716c90ed966_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: Complex_CoT
field_instruction: Question
field_output: Response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: kokovova/d748cb41-997e-4c1f-bb9a-86fb051552d4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/66099716c90ed966_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bfb3e29-461b-417c-9436-3ae19614ca7d
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 1bfb3e29-461b-417c-9436-3ae19614ca7d
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# d748cb41-997e-4c1f-bb9a-86fb051552d4
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.677 | 0.1688 | 250 | 0.7536 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vermoney/afe99449-c7e3-4f1c-8ce9-79ed47f5c4f0 | vermoney | 2025-05-21T18:14:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-21T18:03:26Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: afe99449-c7e3-4f1c-8ce9-79ed47f5c4f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 66099716c90ed966_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: Complex_CoT
field_instruction: Question
field_output: Response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/afe99449-c7e3-4f1c-8ce9-79ed47f5c4f0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/66099716c90ed966_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bfb3e29-461b-417c-9436-3ae19614ca7d
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 1bfb3e29-461b-417c-9436-3ae19614ca7d
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# afe99449-c7e3-4f1c-8ce9-79ed47f5c4f0
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1511 | 0.1891 | 280 | 0.7468 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Samin2010/DepNik | Samin2010 | 2025-05-21T18:13:33Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-21T18:13:33Z | ---
license: bigscience-openrail-m
---
|
alexkahng/fin-llm-lora | alexkahng | 2025-05-21T18:12:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T18:12:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/omourier-gr00t-Lego_rouge-zwz14 | phospho-app | 2025-05-21T18:10:07Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-21T17:50:11Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [omourier/Lego_rouge](https://huggingface.co/datasets/omourier/Lego_rouge)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 27
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
shrimantasatpati/english_to_hindi_FT_gemma3_4b_it | shrimantasatpati | 2025-05-21T18:10:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T18:09:50Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shrimantasatpati
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DanielNRU/pollen-re | DanielNRU | 2025-05-21T18:09:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T12:25:16Z | ---
library_name: transformers
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: pollen-re-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-re-model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5091
- F1: 0.9291
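A hedged usage sketch (the pipeline task follows the card's `text-classification` tag; the input is a placeholder):

```python
# Hypothetical usage sketch; the input sentence is a placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="DanielNRU/pollen-re")
print(classifier("..."))  # a Russian sentence to classify goes here
```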
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 422 | 0.4688 | 0.8965 |
| 0.572 | 2.0 | 844 | 0.5048 | 0.8790 |
| 0.348 | 3.0 | 1266 | 0.4542 | 0.9217 |
| 0.2617 | 4.0 | 1688 | 0.5091 | 0.9291 |
| 0.1491 | 5.0 | 2110 | 0.5265 | 0.9291 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1
|
asdxccxsd/c16c3b45-f564-413d-a285-b70085001c8e | asdxccxsd | 2025-05-21T18:06:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T18:00:58Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c16c3b45-f564-413d-a285-b70085001c8e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 66099716c90ed966_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: Complex_CoT
field_instruction: Question
field_output: Response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: asdxccxsd/c16c3b45-f564-413d-a285-b70085001c8e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora:
alpha: 16
dropout: 0.05
r: 64
target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/66099716c90ed966_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trainer:
gradient_accumulation_steps: 4
learning_rate: 1.4239781646059219e-05
lr_scheduler: cosine
warmup_ratio: 0.03
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bfb3e29-461b-417c-9436-3ae19614ca7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1bfb3e29-461b-417c-9436-3ae19614ca7d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c16c3b45-f564-413d-a285-b70085001c8e
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.6455 | 0.0005 | 1 | 1.0507 |
| 4.2003 | 0.0014 | 3 | 1.0395 |
| 3.9983 | 0.0027 | 6 | 0.9480 |
| 3.3321 | 0.0041 | 9 | 0.9065 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phospho-app/tictactoe-A1-orange-6415-6f595hsr0j | phospho-app | 2025-05-21T18:00:56Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-21T16:19:54Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 15
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
valste/orientation_classifier_224x224_aug_head1_mobnet.keras | valste | 2025-05-21T18:00:20Z | 0 | 0 | keras | [
"keras",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T17:56:35Z | ---
license: apache-2.0
---
|
OL-OL/llama3-vision-finetune | OL-OL | 2025-05-21T17:59:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-13T15:38:56Z | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: transformers
model_name: llama3-vision-finetune
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama3-vision-finetune
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="OL-OL/llama3-vision-finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
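For context, a hedged sketch of an SFT run with TRL's `SFTTrainer` (not the exact script used here; the dataset and settings are placeholders):

```python
# Hypothetical SFT sketch with TRL; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("your/chat-dataset", split="train")  # placeholder

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # base model from this card
    args=SFTConfig(output_dir="llama3-vision-finetune"),
    train_dataset=train_dataset,
)
trainer.train()
```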
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.2
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Gustav098/biomistral_finetuned_medchat2-Q4_K_M-GGUF | Gustav098 | 2025-05-21T17:57:40Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Gustav098/biomistral_finetuned_medchat2",
"base_model:quantized:Gustav098/biomistral_finetuned_medchat2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T17:57:21Z | ---
base_model: Gustav098/biomistral_finetuned_medchat2
tags:
- llama-cpp
- gguf-my-repo
---
# Gustav098/biomistral_finetuned_medchat2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Gustav098/biomistral_finetuned_medchat2`](https://huggingface.co/Gustav098/biomistral_finetuned_medchat2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Gustav098/biomistral_finetuned_medchat2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Gustav098/biomistral_finetuned_medchat2-Q4_K_M-GGUF --hf-file biomistral_finetuned_medchat2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Gustav098/biomistral_finetuned_medchat2-Q4_K_M-GGUF --hf-file biomistral_finetuned_medchat2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Gustav098/biomistral_finetuned_medchat2-Q4_K_M-GGUF --hf-file biomistral_finetuned_medchat2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Gustav098/biomistral_finetuned_medchat2-Q4_K_M-GGUF --hf-file biomistral_finetuned_medchat2-q4_k_m.gguf -c 2048
```
|
jordialters/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-meek_shiny_platypus | jordialters | 2025-05-21T17:55:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am meek shiny platypus",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T06:12:14Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-meek_shiny_platypus
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am meek shiny platypus
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-meek_shiny_platypus
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jordialters/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-meek_shiny_platypus", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
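For context, a hedged sketch of a GRPO run with TRL (not this swarm's actual setup; the reward function and dataset are placeholders):

```python
# Hypothetical GRPO sketch with TRL; reward function and dataset are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(c)) for c in completions]

train_dataset = load_dataset("your/prompt-dataset", split="train")  # placeholder

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",  # base model from this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-grpo"),
    train_dataset=train_dataset,
)
trainer.train()
```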
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Dodhy/CartPole-v1 | Dodhy | 2025-05-21T17:51:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-21T17:51:10Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 484.80 +/- 45.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
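For reference, a minimal REINFORCE training sketch for CartPole-v1 (an illustration of the algorithm, not the code behind this checkpoint):

```python
# Minimal REINFORCE sketch for CartPole-v1; an illustration, not this repo's code.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32))
        )
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Discounted returns, accumulated backwards through the episode.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a baseline
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```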
|
perchinkov/fuyu-skimmer-lora | perchinkov | 2025-05-21T17:45:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T17:45:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DanielNRU/pollen-ner-1950 | DanielNRU | 2025-05-21T17:41:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
] | null | 2025-05-21T04:55:08Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-1950
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-1950
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1418
- Precision: 0.8904
- Recall: 0.9297
- F1: 0.9096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 244 | 0.1403 | 0.8831 | 0.9257 | 0.9039 |
| No log | 2.0 | 488 | 0.1444 | 0.8838 | 0.9317 | 0.9071 |
| 0.1561 | 3.0 | 732 | 0.1403 | 0.8834 | 0.9277 | 0.9050 |
| 0.1561 | 4.0 | 976 | 0.1418 | 0.8904 | 0.9297 | 0.9096 |
| 0.1523 | 5.0 | 1220 | 0.1430 | 0.8800 | 0.9277 | 0.9032 |
| 0.1523 | 6.0 | 1464 | 0.1391 | 0.8868 | 0.9277 | 0.9068 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
phospho-app/tictactoe-A1-orange-6412-q418i4jfxl | phospho-app | 2025-05-21T17:41:27Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-21T16:19:33Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 12
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
mnm3/gemma-3-1b-it-mnm3_colab_local_2b_v2-qlora-adapters | mnm3 | 2025-05-21T17:39:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"region:us"
] | null | 2025-05-21T17:39:47Z | ---
base_model: google/gemma-3-1b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
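Since this section is still a placeholder, here is a hedged sketch of attaching the QLoRA adapters to their base model with PEFT (repo IDs come from this card's metadata; the dtype, prompt, and generation settings are assumptions):

```python
# Hedged sketch: load this repo's QLoRA adapters onto the gemma-3-1b-it base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "mnm3/gemma-3-1b-it-mnm3_colab_local_2b_v2-qlora-adapters")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```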
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mzizo4110/BART-SUMMARIZATION-5 | mzizo4110 | 2025-05-21T17:38:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:adapter:facebook/bart-large-cnn",
"license:mit",
"region:us"
] | null | 2025-05-21T08:14:51Z | ---
library_name: peft
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: BART-SUMMARIZATION-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-SUMMARIZATION-5
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3866
## Model description
More information needed
## Intended uses & limitations
More information needed
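As a rough usage sketch, the adapter can be loaded on top of the base model with PEFT's auto classes. This is an assumption about the intended loading path, not an official snippet, and the generation settings are illustrative:
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForSeq2SeqLM

# Load the LoRA adapter on top of facebook/bart-large-cnn (assumed setup).
model = AutoPeftModelForSeq2SeqLM.from_pretrained("mzizo4110/BART-SUMMARIZATION-5")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

article = "Your long article text goes here."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```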
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4845 | 0.1470 | 250 | 1.3956 |
| 1.4723 | 0.2940 | 500 | 1.3955 |
| 1.4682 | 0.4410 | 750 | 1.3962 |
| 1.465 | 0.5881 | 1000 | 1.3943 |
| 1.4857 | 0.7351 | 1250 | 1.3890 |
| 1.4846 | 0.8821 | 1500 | 1.3866 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mhowl/braindrive-concierge | mhowl | 2025-05-21T17:35:19Z | 0 | 0 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-20T16:37:36Z | ---
license: mit
---
### Download the model
```
(bd) matthew@flopt:~/dev$ huggingface-cli download mhowl/braindrive-concierge
Fetching 4 files: 0%| | 0/4 [00:00<?, ?it/s]Downloading 'README.md' to '/home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/7be5fc7f47d5db027d120b8024982df93db95b74.incomplete'
Downloading 'Modelfile' to '/home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/947ee17a10626fd574f9ba79f9ffa206caefb623.incomplete'
Downloading '.gitattributes' to '/home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/c7d4d058de314de6b38caad71376f8f5ae11fb48.incomplete'
Downloading 'braindrive-qwen3-1.7B.gguf' to '/home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/22b9ae9c19ae6e69f415bbfe602b96b767044e414c25c21923355dd12f7d3bee.incomplete'
.gitattributes: 100%|█████████████████████████████████████| 1.58k/1.58k [00:00<00:00, 4.37MB/s]
Download complete. Moving file to /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/c7d4d058de314de6b38caad71376f8f5ae11fb48
README.md: 100%|████████████████████████████████████████████| 24.0/24.0 [00:00<00:00, 68.3kB/s]
Download complete. Moving file to /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/7be5fc7f47d5db027d120b8024982df93db95b74
Modelfile: 100%|██████████████████████████████████████████████| 696/696 [00:00<00:00, 1.28MB/s]
Download complete. Moving file to /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/947ee17a10626fd574f9ba79f9ffa206caefb623
braindrive-qwen3-1.7B.gguf: 100%|█████████████████████████| 3.45G/3.45G [03:16<00:00, 17.6MB/s]
Download complete. Moving file to /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/blobs/22b9ae9c19ae6e69f415bbfe602b96b767044e414c25c21923355dd12f7d3bee
Fetching 4 files: 100%|██████████████████████████████████████████| 4/4 [03:16<00:00, 49.22s/it]
/home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/snapshots/ce6f158437262b39a76186b50e175993cb6a54c3
```
### Update Modelfile
Use the install location from the download step above. <br/>
`(bd) matthew@flopt:~/dev$ nano /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/snapshots/ce6f158437262b39a76186b50e175993cb6a54c3/Modelfile`
Update the `FROM` line with the GGUF file location.
```
FROM /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/snapshots/ce6f158437262b39a76186b50e175993cb6a54c3/braindrive-qwen3-1.7B.gguf
PARAMETER num_ctx 2048
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
SYSTEM You are the BrainDrive Concierge, an expert assistant dedicated to supporting users of the BrainDrive open-source AI platform. Only answer questions rela>
TEMPLATE """
{{- range .Messages }}
{{- if eq .Role "system" }}
<|im_start|>system
{{ .Content }}
<|im_end|>
{{- else if eq .Role "user" }}
<|im_start|>user
{{ .Content }}
<|im_end|>
{{- else if eq .Role "assistant" }}
<|im_start|>assistant
{{ .Content }}
<|im_end|>
{{- end }}
```
### Check Ollama version
```
(bd) matthew@flopt:~/dev$ ollama --version
ollama version is 0.7.0
```
### Create the model
```
(bd) matthew@flopt:~/dev$ ollama create braindrive-from-hf -f /home/matthew/.cache/huggingface/hub/models--mhowl--braindrive-concierge/snapshots/ce6f158437262b39a76186b50e175993cb6a54c3/Modelfile
gathering model components
copying file sha256:22b9ae9c19ae6e69f415bbfe602b96b767044e414c25c21923355dd12f7d3bee 100%
parsing GGUF
using existing layer sha256:22b9ae9c19ae6e69f415bbfe602b96b767044e414c25c21923355dd12f7d3bee
using existing layer sha256:7467fac2b61c013cad3093dcf159292144d21225ec9769b4219a4c50b6bbc895
using existing layer sha256:02adc710ce97d252a69e4ae766b8509ba4f5d05505ce281872df8249827e1715
using existing layer sha256:dea8e8086ce2bbda78fa927fa35c7651f19b8e268b5d0c9addfc5d89fc3b1082
writing manifest
success
```
### List the models
```
(bd) matthew@flopt:~/dev$ ollama list
NAME ID SIZE MODIFIED
braindrive-from-hf:latest 7767c02fe2c2 3.4 GB 3 seconds ago
braindrive-concierge:latest 7767c02fe2c2 3.4 GB 24 hours ago
qwen3:1.7b 458ce03a2187 1.4 GB 3 days ago
gemma3:4b a2af6cc3eb7f 3.3 GB 4 days ago
deepseek-r1:latest 0a8c26691023 4.7 GB 5 days ago
gemma3:1b 8648f39daa8f 815 MB 5 days ago
```
### Run the model
```
(bd) matthew@flopt:~/dev$ ollama run braindrive-from-hf:latest
>>> who are you?
<think>
</think>
I am BrainDrive Concierge, a helpful assistant dedicated to providing support and
information for users of the BrainDrive open-source AI platform.
>>> what is braindrive?
<think>
</think>
BrainDrive is an open-source AI platform that empowers individuals to build,
customize, and own their AI systems without restrictions or fees. It focuses on
simplicity, accessibility, and control over one's technology.
>>> what are plugins?
<think>
Plugins are add-on features or services you can install in BrainDrive to extend its
capabilities. They allow users to personalize their experience by adding specialized
tools for tasks like creating content, managing data, or interacting with other
systems seamlessly and securely.
>>> what is studio?
<think>
Studio is a feature within BrainDrive designed for creative and technical
customization of AI models. It enables users to build new models from scratch, modify
existing ones, and tailor them to their specific needs without relying on pre-built
templates or external systems. This flexibility allows owners full control over how
they use and develop their AI system.
</think>
>>> /bye
```
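### Query the model via the REST API (optional)
Beyond the interactive CLI, Ollama also exposes a local REST API. A minimal sketch, assuming the default port 11434 and the model tag created above:
```
curl http://localhost:11434/api/generate -d '{
  "model": "braindrive-from-hf:latest",
  "prompt": "What is BrainDrive?",
  "stream": false
}'
```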
|
TOTORONG/qwen3_fine_tensor-Q5_K_S-GGUF | TOTORONG | 2025-05-21T17:34:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:TOTORONG/qwen3_fine_tensor",
"base_model:quantized:TOTORONG/qwen3_fine_tensor",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T17:32:48Z | ---
base_model: TOTORONG/qwen3_fine_tensor
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# TOTORONG/qwen3_fine_tensor-Q5_K_S-GGUF
This model was converted to GGUF format from [`TOTORONG/qwen3_fine_tensor`](https://huggingface.co/TOTORONG/qwen3_fine_tensor) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TOTORONG/qwen3_fine_tensor) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TOTORONG/qwen3_fine_tensor-Q5_K_S-GGUF --hf-file qwen3_fine_tensor-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TOTORONG/qwen3_fine_tensor-Q5_K_S-GGUF --hf-file qwen3_fine_tensor-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TOTORONG/qwen3_fine_tensor-Q5_K_S-GGUF --hf-file qwen3_fine_tensor-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TOTORONG/qwen3_fine_tensor-Q5_K_S-GGUF --hf-file qwen3_fine_tensor-q5_k_s.gguf -c 2048
```
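Once `llama-server` is up, it also exposes an OpenAI-compatible endpoint. A minimal sketch, assuming the default port 8080:
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```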
|
DanielNRU/pollen-ner-1900 | DanielNRU | 2025-05-21T17:28:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
] | null | 2025-05-21T04:20:08Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-1900
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-1900
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1420
- Precision: 0.8868
- Recall: 0.9277
- F1: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
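As a sketch of direct use, the adapter can be loaded with PEFT's auto class on top of the base model. The loading path and the example sentence are assumptions, not an official snippet:
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForTokenClassification

# Load the adapter on top of DeepPavlov/rubert-base-cased (assumed setup).
model = AutoPeftModelForTokenClassification.from_pretrained("DanielNRU/pollen-ner-1900")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")

text = "В пробе обнаружена пыльца берёзы и полыни."  # illustrative sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred_ids)))  # map ids to labels via model.config.id2label
```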
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 238 | 0.1407 | 0.8831 | 0.9257 | 0.9039 |
| No log | 2.0 | 476 | 0.1420 | 0.8868 | 0.9277 | 0.9068 |
| 0.1596 | 3.0 | 714 | 0.1452 | 0.8851 | 0.9277 | 0.9059 |
| 0.1596 | 4.0 | 952 | 0.1438 | 0.8836 | 0.9297 | 0.9061 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Daniel-Tan-ML/Qwen2.5-32B-Instruct_bad-code-twm-repro | Daniel-Tan-ML | 2025-05-21T17:27:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:52:46Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Daniel-Tan-ML
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
phospho-app/tictactoe-A1-orange-6410-hodkr5qvr5 | phospho-app | 2025-05-21T17:27:23Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-21T16:14:32Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
aisi-whitebox/llama-3.1-8b-instruct-finetuned-mo1xc-checkpoint-115-checkpoint-161-checkpoint-138 | aisi-whitebox | 2025-05-21T17:27:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T17:27:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EduardNov123/nanoVLM | EduardNov123 | 2025-05-21T17:23:42Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-21T17:23:09Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("EduardNov123/nanoVLM")
```
|
mohhtl/0ee93685-f521-4e44-b176-74413d6fc54f | mohhtl | 2025-05-21T17:20:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"dataset:train.json",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T16:58:54Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- generated_from_trainer
datasets:
- train.json
model-index:
- name: 0ee93685-f521-4e44-b176-74413d6fc54f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
dataset_prepared_path: b9f13347-e7e7-4d6c-9285-a9b0b48c2d69_last_run_prepared
datasets:
- path: train.json
type:
field: null
field_input: input
field_instruction: system_prompt
field_output: reference_answer
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
flash_attention: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
logging_steps: 1
lora_alpha: 8
lora_dropout: 0.05
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 7
optimizer: adamw_bnb_8bit
output_dir: 0ee93685-f521-4e44-b176-74413d6fc54f
pad_to_sequence_len: null
resume_from_checkpoint: null
sample_packing: false
save_epochs: 1
save_strategy: 'no'
save_total_limit: 1
saves_per_epoch: 1
sequence_len: 2048
special_tokens: null
tf32: false
tokenizer_type: AutoTokenizer
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_log_model: null
wandb_name: null
wandb_project: null
wandb_watch: null
warmup_ratio: 0.0
warmup_steps: 0
weight_decay: 0.0
```
</details><br>
# 0ee93685-f521-4e44-b176-74413d6fc54f
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the train.json dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
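A minimal inference sketch, assuming the LoRA adapter loads on top of `Qwen/Qwen2-0.5B` via PEFT's auto class; the prompt and generation settings are illustrative:
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the LoRA adapter on top of Qwen/Qwen2-0.5B (assumed setup).
model = AutoPeftModelForCausalLM.from_pretrained("mohhtl/0ee93685-f521-4e44-b176-74413d6fc54f")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```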
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 7.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
FormlessAI/d84419b5-2185-433b-9661-35c823d5d3a5 | FormlessAI | 2025-05-21T17:19:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2-0.5B",
"base_model:finetune:Qwen/Qwen2-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:40:30Z | ---
base_model: Qwen/Qwen2-0.5B
library_name: transformers
model_name: d84419b5-2185-433b-9661-35c823d5d3a5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for d84419b5-2185-433b-9661-35c823d5d3a5
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/d84419b5-2185-433b-9661-35c823d5d3a5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/tgad7cnx)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu118
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
matatonic/Devstral-Small-2505-6.5bpw-h8-exl2 | matatonic | 2025-05-21T17:18:31Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"text2text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"exl2",
"region:us"
] | text2text-generation | 2025-05-21T17:17:24Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
---
# Model Card for mistralai/Devstral-Small-2505
Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-Bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).
It is fine-tuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), so it inherits a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed before fine-tuning from `Mistral-Small-3.1`.
For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).
## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: With its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
### SWE-Bench
Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.
| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 232B-A22B.

## Usage
We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold.
You can use it either through our API or by running it locally.
### API
Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.
Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik
mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.39
```
### Local inference
You can also run the model locally. It can be done with LMStudio or other providers listed below.
Launch OpenHands
You can now interact with the model served from LM Studio through OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
The server will start at http://0.0.0.0:3000. Open it in your browser and you will see an AI Provider Configuration tab.
Now you can start a new conversation with the agent by clicking on the plus sign on the left bar.
The model can also be deployed with the following libraries:
- [`LMStudio (recommended for quantized model)`](https://lmstudio.ai/): See [here](#lmstudio-recommended-for-quantized-model)
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
### OpenHands (recommended)
#### Launch a server to deploy Devstral-Small-2505
Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`.
For this tutorial, we spun up a vLLM server with the command:
```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
The server address should be in the following format: `http://<your-server-url>:8000/v1`
#### Launch OpenHands
You can follow installation of OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).
The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Then, you can access the OpenHands UI at `http://localhost:3000`.
#### Connect to the server
When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.
Fill the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server if any)
#### Use OpenHands powered by Devstral
Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.
<details>
<summary>To-Do list app</summary>
1. Let's ask Devstral to generate the app with the following prompt:
```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
- Allows to add a task.
- Allows to delete a task.
- Allows to mark a task as done.
- Displays the list of tasks.
- Store the tasks in a SQLite database.
```

2. Let's see the result
You should see the agent construct the app and be able to explore the code it generated.
If it doesn't do it automatically, ask Devstral to deploy the app or do it manually, and then go to the deployed frontend URL to see the app.


3. Iterate
Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click a task to mark it checked, but having a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or to filter the tasks by status.
Enjoy building with Devstral Small and OpenHands!
</details>
### LMStudio (recommended for quantized model)
Download the weights from huggingface:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2505_gguf" \
--include "devstralQ4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2505_gguf/"
```
You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LM Studio application and click the terminal icon to open the developer tab. Click "Select a model to load" and choose Devstral Q4 K M. Toggle the status button to start the model, and in settings, toggle "Serve on Local Network" on.
* In the tab on the right, you will see an API identifier (which should be devstralq4_k_m) and an API address under API Usage. Note this address; we will use it in the next step.
Launch OpenHands
You can now interact with the model served from LM Studio through OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Click “see advanced settings” on the second line.
In the new tab, toggle Advanced on. Set the custom model to mistral/devstralq4_k_m and the Base URL to the API address from the LM Studio step above. Set the API Key to dummy and click "Save changes".
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend using Devstral in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
2. To ping the client you can use a simple Python snippet.
```py
import requests
import json
from huggingface_hub import hf_hub_download
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Devstral-Small-2505"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "<your-command>",
},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
### Mistral-inference
We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.
#### Install
Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```
#### Download
```python
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Python
You can run the model using the following command:
```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```
You can then prompt it with anything you'd like.
### Ollama
You can run Devstral using the [Ollama](https://ollama.ai/) CLI.
```bash
ollama run devstral
```
### Transformers
To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
```python
import torch
from mistral_common.protocol.instruct.messages import (
SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_file(tekken_file)
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenized = tokenizer.encode_chat_completion(
ChatCompletionRequest(
messages=[
SystemMessage(content=SYSTEM_PROMPT),
UserMessage(content="<your-command>"),
],
)
)
output = model.generate(
input_ids=torch.tensor([tokenized.tokens]),
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
``` |
phospho-app/tictactoe-A1-orange-6408-ny7xozpur7 | phospho-app | 2025-05-21T17:15:59Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-21T16:20:05Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 8
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
MinaMila/phi3_unlearned_ug_e-5_1.0_0.15_0.05_LoRa_Adult_ep1_22 | MinaMila | 2025-05-21T17:15:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T17:15:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vapit/shawgpt-ft | vapit | 2025-05-21T17:15:31Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T17:15:26Z | ---
library_name: peft
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
tags:
- generated_from_trainer
model-index:
- name: shawgpt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
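A minimal inference sketch, assuming the adapter loads on top of the GPTQ base via PEFT's auto class (this requires the optimum/auto-gptq stack and a GPU); the prompt follows the Mistral instruct format and is illustrative:
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the adapter on top of the GPTQ-quantized base (assumed setup).
model = AutoPeftModelForCausalLM.from_pretrained("vapit/shawgpt-ft", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")

prompt = "[INST] What does this model do? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```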
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.2143 | 1.0 | 4 | 3.7811 |
| 3.4665 | 2.0 | 8 | 3.1367 |
| 2.9209 | 3.0 | 12 | 2.6583 |
| 2.4739 | 4.0 | 16 | 2.3120 |
| 2.2214 | 5.0 | 20 | 2.0221 |
| 1.807 | 6.0 | 24 | 1.8424 |
| 1.6541 | 7.0 | 28 | 1.7631 |
| 1.7071 | 7.6154 | 30 | 1.7495 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
haihp02/1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter | haihp02 | 2025-05-21T17:14:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T16:04:11Z | ---
base_model: unsloth/Phi-3.5-mini-instruct
library_name: transformers
model_name: 1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haihp02/1ad156f8-03d0-44ab-9d9a-7dd51b303128-phase1-adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/sn56-sft-before-dpo-train/runs/ndb077vl)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kaiyuyue/zero-checkpoints | kaiyuyue | 2025-05-21T17:12:36Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama-3",
"zero-shot-vision-encoder-grafting",
"text-generation",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-70B-Instruct",
"license:cc",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-24T22:08:08Z | ---
license: cc
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B-Instruct
- meta-llama/Llama-3.1-8B-Instruct
- meta-llama/Llama-3.1-70B-Instruct
- openai/clip-vit-large-patch14
library_name: transformers
tags:
- pytorch
- llama-3
- zero-shot-vision-encoder-grafting
---
## Zero-Shot Vision Encoder Grafting via LLM Surrogates
This repo provides the checkpoints of the project - [facebookresearch/zero](https://github.com/facebookresearch/zero) under the CC-BY-NC 4.0 license. |
DanielNRU/pollen-ner-1800 | DanielNRU | 2025-05-21T17:11:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
] | null | 2025-05-21T02:37:33Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-1800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-1800
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1431
- Precision: 0.8783
- Recall: 0.9277
- F1: 0.9023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 225 | 0.1445 | 0.8760 | 0.9217 | 0.8982 |
| No log | 2.0 | 450 | 0.1492 | 0.8658 | 0.9197 | 0.8919 |
| 0.1668 | 3.0 | 675 | 0.1431 | 0.8783 | 0.9277 | 0.9023 |
| 0.1668 | 4.0 | 900 | 0.1431 | 0.8793 | 0.9217 | 0.9 |
| 0.1612 | 5.0 | 1125 | 0.1403 | 0.8827 | 0.9217 | 0.9018 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
darwinha/FineLlama-3.1-8B-tome200 | darwinha | 2025-05-21T17:11:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T17:05:01Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** darwinha
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jwl2006/Reinforce-004 | jwl2006 | 2025-05-21T17:09:00Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-21T17:08:50Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-004
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
masmatix/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF | masmatix | 2025-05-21T17:06:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-21T17:06:27Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
---
# masmatix/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo masmatix/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo masmatix/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo masmatix/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo masmatix/Llama-3.1-Nemotron-Nano-8B-v1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q4_k_m.gguf -c 2048
```
|
alvperez/skill-sim-model | alvperez | 2025-05-21T17:04:54Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"job-matching",
"skill-similarity",
"embeddings",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-21T16:35:48Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- job-matching
- skill-similarity
- embeddings
---
# alvperez/skill-sim-model
This is a fine-tuned [sentence-transformers](https://www.SBERT.net) model for **skill similarity** and **job matching**. It maps short skill phrases (e.g., `Python`, `Forklift Operation`, `Electrical Wiring`) into a 768-dimensional embedding space, where semantically related skills are closer together.
It can be used for:
- Matching candidates to job requirements
- Measuring similarity between skills
- Clustering and grouping skill sets
- Resume parsing or job recommendation systems
---
## 🧪 Usage (Sentence-Transformers)
To use this model:
```bash
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('alvperez/skill-sim-model')
skills = ["Electrical Wiring", "Circuit Troubleshooting", "Machine Learning"]
embeddings = model.encode(skills)
print(embeddings.shape) # (3, 768)
```
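To score skill pairs directly (e.g. for job matching), cosine similarity over the embeddings is the natural metric; since the model ends with a `Normalize()` module, this is equivalent to a dot product. A minimal sketch using the `util` helpers bundled with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('alvperez/skill-sim-model')

job_skills = ["Electrical Wiring", "Circuit Troubleshooting"]
candidate_skills = ["House Electrical Installation", "Machine Learning"]

job_emb = model.encode(job_skills, convert_to_tensor=True)
cand_emb = model.encode(candidate_skills, convert_to_tensor=True)

# Pairwise similarity matrix: rows = job skills, cols = candidate skills
print(util.cos_sim(job_emb, cand_emb))
```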
---
## 🧭 Evaluation Results
The model was evaluated on a labeled skill similarity dataset using the following metrics:
| Metric | Value |
|----------------------|---------|
| Spearman Correlation | 0.8612 |
| ROC AUC | 0.9127 |
These scores indicate strong alignment with human-labeled skill similarity ratings.
---
## 🧠 Training Details
The model was fine-tuned on a custom skill similarity dataset using `CosineSimilarityLoss`.
### **DataLoader**
`torch.utils.data.dataloader.DataLoader` of length 409 with parameters:
```python
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
### **Loss**
```python
sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss
```
### **Training Parameters**
```python
{
"epochs": 5,
"evaluation_steps": 100,
"evaluator": "EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "AdamW",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"warmup_steps": 100,
"weight_decay": 0.01
}
```
---
## 🧬 Model Architecture
```python
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
---
## 📚 Citation & Attribution
- Model fine-tuned by [@alvperez](https://huggingface.co/alvperez)
- Built with [Sentence-Transformers](https://www.sbert.net/)
- Inspired by semantic search and skill-matching use cases
|
BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmay5sqze03abu1cgzds1no35 | BootesVoid | 2025-05-21T17:02:25Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-21T17:02:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: chelsea
---
# Cmay2E8B8038Bu1Cguoswiyvb_Cmay5Sqze03Abu1Cgzds1No35
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `chelsea` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "chelsea",
"lora_weights": "https://huggingface.co/BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmay5sqze03abu1cgzds1no35/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmay5sqze03abu1cgzds1no35', weight_name='lora.safetensors')
image = pipeline('chelsea').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmay5sqze03abu1cgzds1no35/discussions) to add images that show off what you’ve made with this LoRA.
|
allura-forge/q3-30b-rc1-kto | allura-forge | 2025-05-21T17:00:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Gryphe/Pantheon-Proto-RP-1.8-30B-A3B",
"base_model:merge:Gryphe/Pantheon-Proto-RP-1.8-30B-A3B",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:merge:Qwen/Qwen3-30B-A3B",
"base_model:Qwen/Qwen3-30B-A3B-Base",
"base_model:merge:Qwen/Qwen3-30B-A3B-Base",
"base_model:allura-forge/q3-30b-ft-ep2-merged",
"base_model:merge:allura-forge/q3-30b-ft-ep2-merged",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:54:22Z | ---
base_model:
- Qwen/Qwen3-30B-A3B-Base
- allura-forge/q3-30b-ft-ep2-merged
- Qwen/Qwen3-30B-A3B
- Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
library_name: transformers
tags:
- mergekit
- merge
---
# output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [Qwen/Qwen3-30B-A3B-Base](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base) as a base.
### Models Merged
The following models were included in the merge:
* [allura-forge/q3-30b-ft-ep2-merged](https://huggingface.co/allura-forge/q3-30b-ft-ep2-merged)
* [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)
* [Gryphe/Pantheon-Proto-RP-1.8-30B-A3B](https://huggingface.co/Gryphe/Pantheon-Proto-RP-1.8-30B-A3B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen3-30B-A3B-Base
models:
- model: allura-forge/q3-30b-ft-ep2-merged
parameters:
select_topk: 0.75
- model: Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
parameters:
select_topk: 0.4
- model: Qwen/Qwen3-30B-A3B
parameters:
select_topk: 0.25
merge_method: sce
dtype: bfloat16
```
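To reproduce a merge like this, the YAML is passed to mergekit's CLI; a minimal sketch, assuming mergekit is installed and the configuration above is saved as `config.yaml`:
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```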
|
bartowski/mistralai_Devstral-Small-2505-GGUF | bartowski | 2025-05-21T16:59:06Z | 0 | 0 | null | [
"gguf",
"text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2025-05-21T14:37:41Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: mistralai/Devstral-Small-2505
inference: false
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model_relation: quantized
license: apache-2.0
---
## Llamacpp imatrix Quantizations of Devstral-Small-2505 by mistralai
### Created using Unsloth's conversion (huge thanks for uploading it) from here: https://huggingface.co/unsloth/Devstral-Small-2505
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5432">b5432</a> for quantization.
Original model: https://huggingface.co/mistralai/Devstral-Small-2505
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Vision support thanks to ngxson
ngxson has shown that the original mmproj from Mistral Small 3.1 works seamlessly with Devstral Small, so the mmproj file is here for vision use!
Note this is not official, and may not work perfectly, but in practice seems functional!
## Prompt format
```
<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]
```
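As a quick sanity check of the template, a formatted prompt can be passed straight to `llama-cli`; a minimal sketch (the prompt text is illustrative only):
```bash
llama-cli --hf-repo bartowski/mistralai_Devstral-Small-2505-GGUF \
  --hf-file mistralai_Devstral-Small-2505-Q4_K_M.gguf \
  -p "<s>[SYSTEM_PROMPT]You are a helpful coding assistant.[/SYSTEM_PROMPT][INST]Write a Python function that reverses a string.[/INST]"
```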
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Devstral-Small-2505-bf16.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-bf16.gguf) | bf16 | 47.15GB | false | Full BF16 weights. |
| [Devstral-Small-2505-Q8_0.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q8_0.gguf) | Q8_0 | 25.05GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Devstral-Small-2505-Q6_K_L.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q6_K_L.gguf) | Q6_K_L | 19.67GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Devstral-Small-2505-Q6_K.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q6_K.gguf) | Q6_K | 19.35GB | false | Very high quality, near perfect, *recommended*. |
| [Devstral-Small-2505-Q5_K_L.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q5_K_L.gguf) | Q5_K_L | 17.18GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Devstral-Small-2505-Q5_K_M.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q5_K_M.gguf) | Q5_K_M | 16.76GB | false | High quality, *recommended*. |
| [Devstral-Small-2505-Q5_K_S.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q5_K_S.gguf) | Q5_K_S | 16.30GB | false | High quality, *recommended*. |
| [Devstral-Small-2505-Q4_1.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q4_1.gguf) | Q4_1 | 14.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Devstral-Small-2505-Q4_K_L.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q4_K_L.gguf) | Q4_K_L | 14.83GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Devstral-Small-2505-Q4_K_M.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q4_K_M.gguf) | Q4_K_M | 14.33GB | false | Good quality, default size for most use cases, *recommended*. |
| [Devstral-Small-2505-Q4_K_S.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q4_K_S.gguf) | Q4_K_S | 13.55GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Devstral-Small-2505-Q4_0.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q4_0.gguf) | Q4_0 | 13.49GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Devstral-Small-2505-IQ4_NL.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ4_NL.gguf) | IQ4_NL | 13.47GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Devstral-Small-2505-Q3_K_XL.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q3_K_XL.gguf) | Q3_K_XL | 12.99GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Devstral-Small-2505-IQ4_XS.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ4_XS.gguf) | IQ4_XS | 12.76GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Devstral-Small-2505-Q3_K_L.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q3_K_L.gguf) | Q3_K_L | 12.40GB | false | Lower quality but usable, good for low RAM availability. |
| [Devstral-Small-2505-Q3_K_M.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q3_K_M.gguf) | Q3_K_M | 11.47GB | false | Low quality. |
| [Devstral-Small-2505-IQ3_M.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ3_M.gguf) | IQ3_M | 10.65GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Devstral-Small-2505-Q3_K_S.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q3_K_S.gguf) | Q3_K_S | 10.40GB | false | Low quality, not recommended. |
| [Devstral-Small-2505-IQ3_XS.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ3_XS.gguf) | IQ3_XS | 9.91GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Devstral-Small-2505-Q2_K_L.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q2_K_L.gguf) | Q2_K_L | 9.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Devstral-Small-2505-IQ3_XXS.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ3_XXS.gguf) | IQ3_XXS | 9.28GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Devstral-Small-2505-Q2_K.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-Q2_K.gguf) | Q2_K | 8.89GB | false | Very low quality but surprisingly usable. |
| [Devstral-Small-2505-IQ2_M.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ2_M.gguf) | IQ2_M | 8.11GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Devstral-Small-2505-IQ2_S.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ2_S.gguf) | IQ2_S | 7.48GB | false | Low quality, uses SOTA techniques to be usable. |
| [Devstral-Small-2505-IQ2_XS.gguf](https://huggingface.co/bartowski/mistralai_Devstral-Small-2505-GGUF/blob/main/mistralai_Devstral-Small-2505-IQ2_XS.gguf) | IQ2_XS | 7.21GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/mistralai_Devstral-Small-2505-GGUF --include "mistralai_Devstral-Small-2505-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/mistralai_Devstral-Small-2505-GGUF --include "mistralai_Devstral-Small-2505-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (mistralai_Devstral-Small-2505-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
beyoru/Spark-TTS-0.5B-with-FireSpark-10 | beyoru | 2025-05-21T16:57:44Z | 12 | 0 | null | [
"safetensors",
"text-to-speech",
"en",
"zh",
"base_model:SparkAudio/Spark-TTS-0.5B",
"base_model:finetune:SparkAudio/Spark-TTS-0.5B",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-speech | 2025-05-20T18:54:48Z | ---
license: cc-by-nc-sa-4.0
language:
- en
- zh
tags:
- text-to-speech
library_tag: spark-tts
base_model:
- SparkAudio/Spark-TTS-0.5B
---
|
beyoru/Spark-TTS-0.5B-with-KafSpark | beyoru | 2025-05-21T16:57:26Z | 13 | 0 | null | [
"safetensors",
"text-to-speech",
"en",
"zh",
"base_model:SparkAudio/Spark-TTS-0.5B",
"base_model:finetune:SparkAudio/Spark-TTS-0.5B",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-speech | 2025-05-20T19:15:55Z | ---
license: cc-by-nc-sa-4.0
language:
- en
- zh
tags:
- text-to-speech
library_tag: spark-tts
base_model:
- SparkAudio/Spark-TTS-0.5B
---
|
Gabriel2502/NIH_Xrays-Florence-2-FT-DocVQA | Gabriel2502 | 2025-05-21T16:57:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-21T16:45:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
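In the absence of an official snippet, the standard Florence-2 usage pattern should apply; a minimal sketch (the `<DocVQA>` task prompt and the input file name are assumptions, not confirmed by this card):
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Gabriel2502/NIH_Xrays-Florence-2-FT-DocVQA"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.float16
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("chest_xray.png")  # hypothetical input image
prompt = "<DocVQA>Is there any visible abnormality?"  # assumed task-prompt format

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda", torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```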
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/mistralai.Devstral-Small-2505-GGUF | DevQuasar | 2025-05-21T16:56:35Z | 0 | 0 | null | [
"text-generation",
"base_model:mistralai/Devstral-Small-2505",
"base_model:finetune:mistralai/Devstral-Small-2505",
"region:us"
] | text-generation | 2025-05-21T16:56:34Z | ---
base_model:
- mistralai/Devstral-Small-2505
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [mistralai/Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
ibuki95/model5 | ibuki95 | 2025-05-21T16:56:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-21T13:47:29Z | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
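Once the container is running, the endpoints can be exercised with plain `curl`; a minimal sketch against the defaults (the upload form-field name is an assumption — check the API source for the exact schema):
```bash
curl http://localhost:6500/status/                                  # API health
curl -X POST http://localhost:6500/prepare/                         # download checkpoint, init model
curl -X POST -F "files=@noisy_sample.wav" http://localhost:6500/upload-audio/
curl -X POST http://localhost:6500/enhance/                         # run enhancement
curl -OJ http://localhost:6500/download-enhanced/                   # fetch results
```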
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinjii; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
Ainxz/phi3.5-pucv | Ainxz | 2025-05-21T16:56:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T19:58:05Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ainxz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF | mradermacher | 2025-05-21T16:55:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:yangjie-cv/WeThink-Qwen2.5VL-7B",
"base_model:quantized:yangjie-cv/WeThink-Qwen2.5VL-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-21T16:03:55Z | ---
base_model: yangjie-cv/WeThink-Qwen2.5VL-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/yangjie-cv/WeThink-Qwen2.5VL-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
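For a single quant, the short path is to download one file and point llama.cpp at it; a minimal sketch using the Q4_K_M file from the table below (note this runs text-only — vision use would additionally require an mmproj file):
```
huggingface-cli download mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF \
  WeThink-Qwen2.5VL-7B.i1-Q4_K_M.gguf --local-dir ./
llama-cli -m ./WeThink-Qwen2.5VL-7B.i1-Q4_K_M.gguf -p "Explain chain-of-thought reasoning."
```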
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/WeThink-Qwen2.5VL-7B-i1-GGUF/resolve/main/WeThink-Qwen2.5VL-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
desllre/ru_news_detection | desllre | 2025-05-21T16:55:12Z | 0 | 0 | null | [
"safetensors",
"bert",
"rubert",
"rubert-tiny",
"text-classification",
"russian",
"social-media",
"news",
"fine-tuned",
"taiga",
"ru",
"dataset:Taiga",
"base_model:cointegrated/rubert-tiny2",
"base_model:finetune:cointegrated/rubert-tiny2",
"license:mit",
"region:us"
] | text-classification | 2025-05-21T16:20:01Z | ---
language: ru
license: mit
tags:
- rubert
- rubert-tiny
- text-classification
- russian
- social-media
- news
- fine-tuned
- taiga
metrics:
- accuracy
- precision
- recall
- f1
base_model: cointegrated/rubert-tiny2
datasets:
- Taiga
---
## Russian news detection
### About
- Model based on `cointegrated/rubert-tiny2`
- The model was further trained on social media and news texts from the [Taiga](https://tatianashavrina.github.io/taiga_site/) text corpus
- Model quality metrics on the validation set:
| Accuracy | Precision | Recall | F1-score |
| -------- | --------- | -------- | -------- |
| 0.996342 | 0.999747 | 0.993717 | 0.996723 |
### Getting started
```python
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pickle
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_path = 'desllre/ru_news_detection'
encoder_path = hf_hub_download(repo_id=model_path, filename="encoder.pkl")
with open(encoder_path, "rb") as f:
encoder = pickle.load(f)
tokenizer = AutoTokenizer.from_pretrained(model_path)
classifier = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)
text = 'Tesla дала добро на взлом ПО своих автомобилей\n\nКомпания изменила условия программы Bug Bounty, предусматривающей выплату вознаграждений за поиск уязвимостей. Теперь энтузиасты могут взламывать электрокары Tesla, не боясь отзыва гарантии. Более того, в соответствии с новой политикой компании, автопроизводитель будет перепрошивать автомобили, ПО которых вышло из строя в процессе экспериментов специалистов кибербезопасности.\n\nИзменения в политике компании Telsa очень тепло встретили представители индустрии.'
# The original snippet referenced an undefined `tokenize_function` helper; call the tokenizer directly instead
tokenized = tokenizer(text, truncation=True, padding=True, return_tensors='pt')
tokenized = {key: value.to(device) for key, value in tokenized.items()}
with torch.no_grad():
output = classifier(**tokenized)
predicted_class_id = torch.argmax(output.logits, dim=1).item()
label = encoder.inverse_transform([predicted_class_id])[0]
print(label)
```
|
gavrilstep/95409e72-553b-407c-8724-3b48ac7fb3b9 | gavrilstep | 2025-05-21T16:52:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-21T16:41:06Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 95409e72-553b-407c-8724-3b48ac7fb3b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 8238689af7edb3c9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8238689af7edb3c9_train_data.json
type:
field_instruction: system
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: gavrilstep/95409e72-553b-407c-8724-3b48ac7fb3b9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.01
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/8238689af7edb3c9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dd656613-2166-41f4-8840-76ceb5e9b641
wandb_project: s56-7
wandb_run: your_name
wandb_runid: dd656613-2166-41f4-8840-76ceb5e9b641
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 95409e72-553b-407c-8724-3b48ac7fb3b9
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
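Since this repository holds a LoRA adapter, it loads on top of the base model via PEFT; a minimal sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-1.5B", device_map="auto")
model = PeftModel.from_pretrained(base, "gavrilstep/95409e72-553b-407c-8724-3b48ac7fb3b9")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```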
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2883 | 0.0098 | 150 | 1.8758 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ilybawkugo/lora_qwen_2e-4-1616-512 | ilybawkugo | 2025-05-21T16:51:16Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T15:49:14Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ilybawkugo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Gitanjali1801/ctrl_b_and_b-2 | Gitanjali1801 | 2025-05-21T16:50:26Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-15T17:25:39Z | ---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-Gitanjali1801/ctrl_b_and_b-2
These are controlnet weights trained on stable-diffusion-v1-5/stable-diffusion-v1-5 with new type of conditioning.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
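Until the snippet above is filled in, the standard diffusers ControlNet pattern should apply; a minimal sketch (the conditioning image and prompt are placeholders, and the conditioning type this checkpoint expects is not documented here):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "Gitanjali1801/ctrl_b_and_b-2", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

cond = load_image("conditioning.png")  # hypothetical conditioning image
image = pipe("a photo of a living room", image=cond, num_inference_steps=30).images[0]
image.save("output.png")
```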
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
matatonic/Devstral-Small-2505-5.0bpw-exl2 | matatonic | 2025-05-21T16:48:52Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"text2text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"5-bit",
"exl2",
"region:us"
] | text2text-generation | 2025-05-21T16:47:56Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
---
# Model Card for mistralai/Devstral-Small-2505
Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-Bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).
It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), so it has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed before fine-tuning from `Mistral-Small-3.1`.
For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).
## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
### SWE-Bench
Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.
| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 232B-A22B.

## Usage
We recommend to use Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold.
You can use it either through our API or by running locally.
### API
Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.
Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik
mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.39
```
### Local inference
You can also run the model locally. It can be done with LMStudio or other providers listed below.
Launch OpenHands
You can now interact with the model served from LM Studio through OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
The server will start at http://0.0.0.0:3000. Open it in your browser and you will see the AI Provider Configuration tab.
Now you can start a new conversation with the agent by clicking on the plus sign on the left bar.
The model can also be deployed with the following libraries:
- [`LMStudio (recommended for quantized model)`](https://lmstudio.ai/): See [here](#lmstudio-recommended-for-quantized-model)
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
### OpenHands (recommended)
#### Launch a server to deploy Devstral-Small-2505
Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`.
For this tutorial, we spun up a vLLM server by running the command:
```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
The server address should be in the following format: `http://<your-server-url>:8000/v1`
#### Launch OpenHands
You can follow the OpenHands installation instructions [here](https://docs.all-hands.dev/modules/usage/installation).
The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Then, you can access the OpenHands UI at `http://localhost:3000`.
#### Connect to the server
When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.
Fill in the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server, if any)
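Before saving, it can help to sanity-check that the endpoint is reachable. The following is a minimal sketch, assuming a vLLM server exposing the standard OpenAI-compatible `/v1/models` route; the server URL and token are placeholders:
```python
import requests

# Hypothetical server URL; replace with your own deployment.
base_url = "http://<your-server-url>:8000/v1"
headers = {"Authorization": "Bearer token"}  # same token used to launch the server

# vLLM's OpenAI-compatible server lists served models at /v1/models.
resp = requests.get(f"{base_url}/models", headers=headers, timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])  # should include mistralai/Devstral-Small-2505
```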
#### Use OpenHands powered by Devstral
Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.
<details>
<summary>To-Do list app</summary>
1. Let's ask Devstral to generate the app with the following prompt:
```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
- Allows to add a task.
- Allows to delete a task.
- Allows to mark a task as done.
- Displays the list of tasks.
- Store the tasks in a SQLite database.
```

2. Let's see the result
You should see the agent construct the app and be able to explore the code it generated.
If it doesn't deploy automatically, ask Devstral to deploy the app or do it manually, then go to the frontend deployment URL to see the app.


3. Iterate
Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app you can click a task to mark it as done, but a checkbox would improve the UX. You could also ask it to add a feature for editing a task, or for filtering tasks by status.
Enjoy building with Devstral Small and OpenHands!
</details>
### LMStudio (recommended for quantized model)
Download the weights from huggingface:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2505_gguf" \
--include "devstralQ4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2505_gguf/"
```
You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LM Studio application, click the terminal icon to open the developer tab, click "Select a model to load", and select "Devstral Q4 K M". Toggle the status button to start the model, and in settings toggle "Serve on Local Network" on.
* On the right tab you will see an API identifier, which should be `devstralq4_k_m`, and an API address under API Usage. Keep note of this address; we will use it in the next step.
Launch OpenHands
You can now interact with the model served from LM Studio through OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Click “see advanced settings” on the second line.
In the new tab, toggle advanced mode on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address from the previous LM Studio step. Set the API Key to `dummy`. Click "Save changes".
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend that you use Devstral in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
2. To query the server, you can use a simple Python snippet:
```py
import requests
import json
from huggingface_hub import hf_hub_download
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Devstral-Small-2505"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "<your-command>",
},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
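Since the vLLM server speaks the OpenAI chat-completions protocol, the official `openai` Python client can be used instead of raw `requests`. A minimal sketch under that assumption; the base URL and key are placeholders:
```python
from openai import OpenAI
from huggingface_hub import hf_hub_download

model = "mistralai/Devstral-Small-2505"
# Load the same system prompt used in the snippet above.
with open(hf_hub_download(repo_id=model, filename="SYSTEM_PROMPT.txt")) as f:
    system_prompt = f.read()

client = OpenAI(base_url="http://<your-server-url>:8000/v1", api_key="token")
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "<your-command>"},
    ],
    temperature=0.15,
)
print(response.choices[0].message.content)
```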
### Mistral-inference
We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.
#### Install
Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```
#### Download
```python
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Python
You can run the model using the following command:
```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```
You can then prompt it with anything you'd like.
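If you prefer to drive generation from Python rather than the `mistral-chat` CLI, the following is a minimal sketch in the style of other `mistral_inference` examples; the APIs are assumed from `mistral_inference >= 1.6.0` and the paths follow the download step above:
```python
from pathlib import Path

from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

# Same download location as in the snapshot_download step above.
mistral_models_path = Path.home().joinpath("mistral_models", "Devstral")

tokenizer = MistralTokenizer.from_file(str(mistral_models_path / "tekken.json"))
model = Transformer.from_folder(str(mistral_models_path))

request = ChatCompletionRequest(messages=[UserMessage(content="<your-command>")])
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=300,
    temperature=0.15,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.decode(out_tokens[0]))
```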
### Ollama
You can run Devstral using the [Ollama](https://ollama.ai/) CLI.
```bash
ollama run devstral
```
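Ollama also exposes a local REST API (on port 11434 by default), so the same model can be queried programmatically. A minimal sketch, assuming the `devstral` model has already been pulled:
```python
import requests

# Ollama's local daemon listens on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "devstral",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "stream": False,  # single JSON response instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```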
### Transformers
To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
```python
import torch
from mistral_common.protocol.instruct.messages import (
SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_file(tekken_file)
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenized = tokenizer.encode_chat_completion(
ChatCompletionRequest(
messages=[
SystemMessage(content=SYSTEM_PROMPT),
UserMessage(content="<your-command>"),
],
)
)
output = model.generate(
input_ids=torch.tensor([tokenized.tokens]),
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
``` |
DanielNRU/pollen-ner-1700 | DanielNRU | 2025-05-21T16:47:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
] | null | 2025-05-20T17:05:45Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-1700
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-1700
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1476
- Precision: 0.8615
- Recall: 0.9116
- F1: 0.8859
## Model description
More information needed
## Intended uses & limitations
More information needed
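The card does not yet document usage; the following is a minimal sketch, assuming the adapter was trained for token classification on top of the base model (the example sentence and label handling are placeholders):
```python
import torch
from peft import AutoPeftModelForTokenClassification
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = AutoPeftModelForTokenClassification.from_pretrained("DanielNRU/pollen-ner-1700")
model.eval()

text = "Пыльца берёзы обнаружена в пробе."  # placeholder Russian sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map predicted label ids to names via model.config.id2label once the label set is documented.
pred_ids = logits.argmax(dim=-1)[0].tolist()
print(pred_ids)
```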
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 213 | 0.1437 | 0.8626 | 0.9076 | 0.8845 |
| No log | 2.0 | 426 | 0.1461 | 0.8642 | 0.9076 | 0.8854 |
| 0.1816 | 3.0 | 639 | 0.1476 | 0.8615 | 0.9116 | 0.8859 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Fizzarolli/q3-30b-rc1-kto-adpt | Fizzarolli | 2025-05-21T16:46:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:allura-forge/q3-30b-rc1",
"base_model:adapter:allura-forge/q3-30b-rc1",
"region:us"
] | null | 2025-05-21T16:46:19Z | ---
base_model: allura-forge/q3-30b-rc1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
angrotanak/results | angrotanak | 2025-05-21T16:44:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T07:17:07Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.1
- Pytorch 2.6.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
OL-OL/llama3-vision-graphs-5000_finetuned | OL-OL | 2025-05-21T16:43:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-05-21T16:41:17Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SamanthaStorm/tether-darvo-regressor-v1 | SamanthaStorm | 2025-05-21T16:42:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T16:41:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aw1605/smoltalk_sft | aw1605 | 2025-05-21T16:40:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:39:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rubricreward/R3-Phi-4-reasoning-plus-LoRA-4k | rubricreward | 2025-05-21T16:39:56Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"lora",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-4K",
"arxiv:2505.13388",
"base_model:microsoft/Phi-4-reasoning-plus",
"base_model:adapter:microsoft/Phi-4-reasoning-plus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-16T06:33:04Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-4K
base_model:
- microsoft/Phi-4-reasoning-plus
pipeline_tag: text-generation
library_name: transformers
tags:
- lora
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Phi-4-reasoning-plus-LoRA-4k
R3-Phi-4-reasoning-plus-LoRA-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family on the 4B, 8B, and 14B scales as well as on Phi-4-reasoning plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** microsoft/Phi-4-reasoning-plus
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Phi-4-reasoning-plus-LoRA-4k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32768, min_p=0, top_k=50)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
rubricreward/R3-Qwen3-4B-14k | rubricreward | 2025-05-21T16:38:58Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-14K",
"arxiv:2505.13388",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-14T08:24:50Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-14K
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Qwen3-4B-14k
R3-Qwen3-4B-14k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family on the 4B, 8B, and 14B scales as well as on Phi-4-reasoning plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-4B
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Qwen3-4B-14k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
mlx-community/Devstral-Small-2505-3bit | mlx-community | 2025-05-21T16:38:56Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"3-bit",
"region:us"
] | text-generation | 2025-05-21T15:42:41Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mistralai/Devstral-Small-2505
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/Devstral-Small-2505-3bit
This model [mlx-community/Devstral-Small-2505-3bit](https://huggingface.co/mlx-community/Devstral-Small-2505-3bit) was
converted to MLX format from [mistralai/Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Devstral-Small-2505-3bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
rubricreward/R3-Qwen3-4B-LoRA-4k | rubricreward | 2025-05-21T16:38:23Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"lora",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-4K",
"arxiv:2505.13388",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-20T02:56:45Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-4K
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- lora
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Qwen3-4B-LoRA-4k
R3-Qwen3-4B-LoRA-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family on the 4B, 8B, and 14B scales as well as on Phi-4-reasoning plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-4B
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Qwen3-4B-LoRA-4k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
mlx-community/Devstral-Small-2505-4bit | mlx-community | 2025-05-21T16:38:23Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-05-21T15:49:11Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mistralai/Devstral-Small-2505
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/Devstral-Small-2505-4bit
This model [mlx-community/Devstral-Small-2505-4bit](https://huggingface.co/mlx-community/Devstral-Small-2505-4bit) was
converted to MLX format from [mistralai/Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Devstral-Small-2505-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
rubricreward/R3-Qwen3-8B-14k | rubricreward | 2025-05-21T16:38:08Z | 30 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-14K",
"arxiv:2505.13388",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-14T09:09:06Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-14K
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
library_name: transformers
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Qwen3-8B-14k
R3-Qwen3-8B-14k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family on the 4B, 8B, and 14B scales as well as on Phi-4-reasoning plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-8B
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Qwen3-8B-14k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
rubricreward/R3-Qwen3-8B-4k | rubricreward | 2025-05-21T16:37:46Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-4K",
"arxiv:2505.13388",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-14T09:01:26Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-4K
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
library_name: transformers
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Qwen3-8B-4k
R3-Qwen3-8B-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family on the 4B, 8B, and 14B scales as well as on Phi-4-reasoning plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-8B
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Qwen3-8B-4k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
EXOAI3/Tao | EXOAI3 | 2025-05-21T16:37:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T16:36:30Z | ---
license: apache-2.0
---
|
rubricreward/R3-Qwen3-14B-14k | rubricreward | 2025-05-21T16:37:15Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-14K",
"arxiv:2505.13388",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-14T15:46:14Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-14K
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
library_name: transformers
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Qwen3-14B-14k
R3-Qwen3-14B-14k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family on the 4B, 8B, and 14B scales as well as on Phi-4-reasoning plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-14B
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Qwen3-14B-14k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
mlx-community/Devstral-Small-2505-bf16 | mlx-community | 2025-05-21T16:37:12Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:finetune:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-21T15:56:40Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mistralai/Devstral-Small-2505
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/Devstral-Small-2505-bf16
This model [mlx-community/Devstral-Small-2505-bf16](https://huggingface.co/mlx-community/Devstral-Small-2505-bf16) was
converted to MLX format from [mistralai/Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Devstral-Small-2505-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
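The same checkpoint can also be driven from the shell; a quick sketch using the CLI entry point installed by `mlx-lm`:
```bash
mlx_lm.generate --model mlx-community/Devstral-Small-2505-bf16 --prompt "hello"
```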
|
rubricreward/R3-Qwen3-14B-4k | rubricreward | 2025-05-21T16:37:01Z | 30 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:rubricreward/R3-Dataset-4K",
"arxiv:2505.13388",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-12T18:15:35Z | ---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-4K
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
library_name: transformers
---
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">
# R3-Qwen3-14B-4k
R3-Qwen3-14B-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family at the 4B, 8B, and 14B scales, as well as on Phi-4-reasoning-plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!
## Model description
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers
tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s),
evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-14B
### Model Sources
- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## Using the Model
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path = "rubricreward/R3-Qwen3-14B-4k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=10000,
tensor_parallel_size=2,
gpu_memory_utilization=0.9,
enforce_eager=True,
)
messages: list[dict[str, str]] = [
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]
list_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switch between thinking and non-thinking modes.
)
outputs = llm.generate(list_text, sampling_params)
```
## License and use
R3 is licensed under the Apache 2.0 license.
## Citation
```bibtex
@article{anugraha2025r3,
title={R3: Robust Rubric-Agnostic Reward Models},
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
journal={arXiv preprint arXiv:2505.13388},
year={2025}
}
``` |
manuth/wer7_augPitch | manuth | 2025-05-21T16:36:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"khm",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-21T16:31:02Z | ---
library_name: transformers
pipeline_tag: automatic-speech-recognition
language:
- khm
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Finetuned for Khmer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Finetuned for Khmer
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0204
- Wer: 0.0998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0252 | 0.7722 | 200 | 0.0256 | 0.1182 |
| 0.0193 | 1.5444 | 400 | 0.0219 | 0.1037 |
| 0.0099 | 2.3166 | 600 | 0.0204 | 0.0998 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
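For a quick check of the checkpoint, a minimal inference sketch with the 🤗 `pipeline` API; the audio path is a placeholder for a Khmer speech recording:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a standard Whisper ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="manuth/wer7_augPitch")

# "audio.wav" is a placeholder path to a Khmer recording.
result = asr("audio.wav")
print(result["text"])
```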
|
mod479711/starvector-1b-im2svg-GGUF | mod479711 | 2025-05-21T16:36:03Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"svg",
"text-generation",
"en",
"base_model:starvector/starvector-1b-im2svg",
"base_model:quantized:starvector/starvector-1b-im2svg",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-20T16:42:37Z | ---
license: apache-2.0
language:
- en
base_model:
- starvector/starvector-1b-im2svg
pipeline_tag: text-generation
tags:
- svg
library_name: transformers
---
### Starvector-GGUF

Quantized version of https://huggingface.co/starvector/starvector-1b-im2svg. The model cannot be fully used anywhere yet, but work is underway to modify llama.cpp to add support.
### Llama.cpp for Starvector
https://github.com/mod47971/llama.cpp-advanced_arch
| | Starvector-1b | Starvector-8b |
| --- | --- | --- |
| Quantization | ✅ | ❌ |
| Inference | ? (I haven't tested it yet, but theoretically it should work) | ❌ | |
EmreGed/sunergy8bit8e | EmreGed | 2025-05-21T16:34:55Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T16:30:37Z | ---
license: apache-2.0
---
|
iTroned/self_iterative_v2_targeted_iteration_1 | iTroned | 2025-05-21T16:31:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T05:07:42Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: self_iterative_v2_targeted_iteration_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/vc8gvd7c)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/vc8gvd7c)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/vc8gvd7c)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/vc8gvd7c)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/vc8gvd7c)
# self_iterative_v2_targeted_iteration_1
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5896
- Accuracy Targeted: 0.6822
- F1 Macro Targeted: 0.4884
- F1 Weighted Targeted: 0.6003
- F1 Macro Total: 0.4884
- F1 Weighted Total: 0.6003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Targeted | F1 Macro Targeted | F1 Weighted Targeted | F1 Macro Total | F1 Weighted Total |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:-----------------:|:--------------------:|:--------------:|:-----------------:|
| 0.3375 | 1.0 | 5899 | 4.8907 | 0.6778 | 0.4040 | 0.5476 | 0.4040 | 0.5476 |
| 0.2568 | 2.0 | 11798 | 4.9433 | 0.6778 | 0.4040 | 0.5476 | 0.4040 | 0.5476 |
| 0.2147 | 3.0 | 17697 | 5.5877 | 0.6889 | 0.4516 | 0.5799 | 0.4516 | 0.5799 |
| 0.1024 | 4.0 | 23596 | 6.9925 | 0.6733 | 0.4271 | 0.5607 | 0.4271 | 0.5607 |
| 0.1134 | 5.0 | 29495 | 6.5896 | 0.6822 | 0.4884 | 0.6003 | 0.4884 | 0.6003 |
| 0.0407 | 6.0 | 35394 | 7.8903 | 0.6733 | 0.4694 | 0.5863 | 0.4694 | 0.5863 |
| 0.024 | 7.0 | 41293 | 6.2429 | 0.6756 | 0.4451 | 0.5722 | 0.4451 | 0.5722 |
| 0.0105 | 8.0 | 47192 | 9.8105 | 0.68 | 0.4360 | 0.5679 | 0.4360 | 0.5679 |
| 0.0334 | 9.0 | 53091 | 9.7531 | 0.6689 | 0.4194 | 0.5547 | 0.4194 | 0.5547 |
| 0.0161 | 10.0 | 58990 | 11.4841 | 0.6667 | 0.4408 | 0.5672 | 0.4408 | 0.5672 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
beyoru/Kafka-Spark | beyoru | 2025-05-21T16:30:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:27:22Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hlhs211/aphasia_gemma3_27b | hlhs211 | 2025-05-21T16:30:28Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T16:06:38Z | ---
license: apache-2.0
---
|
darwinha/FineLlama-3.1-8B-alpaca200 | darwinha | 2025-05-21T16:28:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:24:02Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** darwinha
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yeniguno/opus-mt-en-tr-kafkaesque | yeniguno | 2025-05-21T16:26:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"tr",
"base_model:Helsinki-NLP/opus-mt-tc-big-en-tr",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-05-21T16:18:44Z | ---
library_name: transformers
tags:
- translation
license: apache-2.0
language:
- en
- tr
metrics:
- bleu
base_model:
- Helsinki-NLP/opus-mt-tc-big-en-tr
pipeline_tag: translation
---
# Model Card for **yeniguno/opus-mt-en-tr-kafkaesque**
A fine-tuned **MarianMT** model that translates **English prose into Turkish with a deliberate “Kafkaesque” flavour**.
The checkpoint starts from the bilingual **Helsinki-NLP/opus-mt-tc-big-en-tr** base model and is further trained on ~10k parallel sentences taken from published Turkish & English versions of Franz Kafka's works.
The goal was purely experimental:
> *Can a compact MT model be nudged toward a specific literary voice by exposing it to a small, style-consistent corpus?*
---
## Model Details
| | |
|---|---|
| **Base architecture** | MarianMT (Transformer encoder-decoder) |
| **Source languages** | `en` (contemporary English) |
| **Target language** | `tr` (modern Turkish) |
| **Training corpus** | 10 014 sentence pairs manually aligned from Turkish editions of Kafka’s short stories and their authorised English translations |
| **Framework** | 🤗 Transformers ≥ 4.40 |
| **License** | Apache-2.0 for the *model code + weights* ✧ ⚠️ Translations used for fine-tuning may still be under copyright; see *“Data & Copyright”* below |
---
## Intended Uses & Scope
| **You can** | **You should not** |
|-------------|--------------------|
| Generate *draft* Turkish renderings of Kafka excerpts originally translated into English | Assume output is authoritative or publication-ready |
| Explore style-transfer / literary MT research | Rely on the model for technical, legal or medical translation |
| Use as a starting point for further stylistic fine-tuning | Expect high accuracy outside Kafka’s narrative domain |
---
## Training Procedure
* **Hardware:** 1× A100 40 GB (Google Colab Pro)
* **Hyper-params:** 5 epochs, batch 16 (eff.), LR 5 × 10⁻⁵, linear decay, warm-up 200 steps
* **Early stopping:** patience 3 (@ 500-step evals) monitored on BLEU
* **Best checkpoint:** step 2 500
* Train loss ≈ 0.42 → Val loss ≈ 1.01
* SacreBLEU (500-sent dev) **baseline 24.4 → tuned 31.8**
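The BLEU figures above come from SacreBLEU; a minimal sketch of how such a corpus-level score is computed (the sentences below are placeholders, not the actual dev set):
```python
import sacrebleu

# Placeholder system outputs and gold references (one reference per sentence).
hypotheses = ["Komşum her gece aynı saatte evinden çıkıyor."]
references = [["Komşum her gece tam aynı saatte evinden çıkıyor."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```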
---
## Quick Start
```python
from transformers import MarianMTModel, MarianTokenizer
en_tr_model_name = "yeniguno/opus-mt-en-tr-kafkaesque"
tokenizer = MarianTokenizer.from_pretrained(en_tr_model_name)
model = MarianMTModel.from_pretrained(en_tr_model_name)
# English source sentence to be rendered into Turkish
english_text = "My neighbor, at the same peculiar hour each night, left his room with a small, locked bag in hand."
inputs = tokenizer(english_text, return_tensors="pt", padding=True)
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
``` |
ErsiZhao/example-model | ErsiZhao | 2025-05-21T16:26:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-21T01:02:49Z | ---
license: mit
---
# Example model
This is my model card preview.
|
yeniguno/opus-mt-tr-en-kafkaesque | yeniguno | 2025-05-21T16:25:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"tr",
"en",
"base_model:Helsinki-NLP/opus-mt-tr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-tr-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-05-21T15:37:07Z | ---
library_name: transformers
tags:
- translation
license: apache-2.0
language:
- tr
- en
metrics:
- bleu
base_model:
- Helsinki-NLP/opus-mt-tr-en
pipeline_tag: translation
---
# Model Card
A fine-tuned **MarianMT** model that translates **Turkish prose into English with a deliberate “Kafkaesque” flavour**.
The checkpoint starts from the bilingual **Helsinki-NLP/opus-mt-tr-en** base model and is further trained on ~10 k parallel sentences taken from published Turkish & English versions of Franz Kafka’s works.
The goal was purely experimental:
> *Can a compact MT model be nudged toward a specific literary voice by exposing it to a small, style-consistent corpus?*
---
## Model Details
| | |
|---|---|
| **Base architecture** | MarianMT (Transformer encoder-decoder) |
| **Source languages** | `tr` (modern Turkish) |
| **Target language** | `en` (contemporary English) |
| **Training corpus** | 10 014 sentence pairs manually aligned from Turkish editions of Kafka’s short stories & *Die Verwandlung* and their authorised English translations |
| **Framework** | 🤗 Transformers ≥ 4.40 |
| **License** | Apache-2.0 for the *model code + weights* ✧ ⚠️ Translations used for fine-tuning may still be under copyright; see *“Data & Copyright”* below |
---
## Training Procedure
* **Hardware:** 1× A100 40 GB (Google Colab Pro)
* **Hyper-params:** 5 epochs, batch 16 (eff.), LR 5 × 10⁻⁵, linear decay, warm-up 200 steps
* **Early stopping:** patience 3 (@ 500-step evals) monitored on BLEU
* **Best checkpoint:** step 2 500
* Train loss ≈ 0.61 → Val loss ≈ 1.20
* SacreBLEU (500-sent dev) **baseline 24.7 → tuned 31.5**
---
## Quick Start
```python
from transformers import MarianMTModel, MarianTokenizer
tr_en_model_name = "yeniguno/opus-mt-tr-en-kafkaesque"
tokenizer = MarianTokenizer.from_pretrained(tr_en_model_name)
model = MarianMTModel.from_pretrained(tr_en_model_name)
turkish_text = "Komşum her gece tam aynı tuhaf saatte, elinde küçük, kilitli bir çantayla dairesinden çıkıyor."
inputs = tokenizer(turkish_text, return_tensors="pt", padding=True)
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
``` |
bullerwins/Devstral-Small-2505-fp8 | bullerwins | 2025-05-21T16:21:49Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"text2text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | text2text-generation | 2025-05-21T16:02:52Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
---
Quantized to FP8 with [LLMCompressor](https://github.com/vllm-project/llm-compressor)
Ideal to run on a dual-GPU system like 2x3090 with vLLM or SGLang:
`vllm serve bullerwins/Devstral-Small-2505-fp8 --max-model-len 16000 --host 0.0.0.0 --port 5000 -tp 2 --tokenizer_mode mistral`
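For reference, a minimal sketch of a dynamic-FP8 one-shot pass with LLMCompressor; the exact recipe used for this checkpoint is not published here, so treat the scheme and ignore list as assumptions:
```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Devstral-Small-2505", torch_dtype="auto"
)

# Quantize all Linear layers to FP8 with dynamic activation scales,
# keeping the LM head in full precision.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

model.save_pretrained("Devstral-Small-2505-fp8")
```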
# Devstral-Small-2505
Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files and powering software engineering agents. The model achieves remarkable performance on SWE-Bench, positioning it as the #1 open-source model on this [benchmark](#benchmark-results).
It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), so it has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed before fine-tuning from `Mistral-Small-3.1`.
For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).
## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
### SWE-Bench
Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.
| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3-235B-A22B.

## Usage
We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold.
You can use it either through our API or by running locally.
### API
Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.
Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik
mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.39
```
### Local inference
The model can also be deployed with the following libraries:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
### OpenHands (recommended)
#### Launch a server to deploy Devstral-Small-2505
Make sure you have launched an OpenAI-compatible server such as vLLM or Ollama, as described in the deployment sections below. Then you can use OpenHands to interact with `Devstral-Small-2505`.
For this tutorial, we spun up a vLLM server with the command:
```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
The server address should be in the following format: `http://<your-server-url>:8000/v1`
#### Launch OpenHands
You can follow installation of OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).
The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Then, you can access the OpenHands UI at `http://localhost:3000`.
#### Connect to the server
When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.
Fill the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server if any)
#### Use OpenHands powered by Devstral
Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.
<details>
<summary>To-Do list app</summary>
1. Let's ask Devstral to generate the app with the following prompt:
```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
- Allows to add a task.
- Allows to delete a task.
- Allows to mark a task as done.
- Displays the list of tasks.
- Store the tasks in a SQLite database.
```

2. Let's see the result
You should see the agent construct the app and be able to explore the code it generated.
If it doesn't do so automatically, ask Devstral to deploy the app or deploy it manually, then open the frontend deployment URL to see the app.


3. Iterate
Now that you have a first result, you can iterate by asking your agent to improve it. For example, in the generated app we could click a task to mark it as done, but a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or to filter tasks by status.
Enjoy building with Devstral Small and OpenHands!
</details>
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or the prebuilt image on [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend that you use Devstral in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
2. To query the server you can use a simple Python snippet.
```py
import requests
import json
from huggingface_hub import hf_hub_download
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Devstral-Small-2505"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "<your-command>",
},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
### Mistral-inference
We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.
#### Install
Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```
#### Download
```python
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Python
You can run the model using the following command:
```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```
You can then prompt it with anything you'd like.
### Transformers
To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` so you can use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
```python
import torch
from mistral_common.protocol.instruct.messages import (
SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_file(tekken_file)
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenized = tokenizer.encode_chat_completion(
ChatCompletionRequest(
messages=[
SystemMessage(content=SYSTEM_PROMPT),
UserMessage(content="<your-command>"),
],
)
)
output = model.generate(
input_ids=torch.tensor([tokenized.tokens]),
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```
### LMStudio
Download the weights from huggingface:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2505_gguf" \
--include "devstralQ4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2505_gguf/"
```
You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LM Studio application, click the terminal icon to open the developer tab, click "Select a model to load" and choose Devstral Q4_K_M. Toggle the status button to start the model, and in the settings toggle "Serve on Local Network" on.
* In the right-hand tab you will see an API identifier, which should be `devstralq4_k_m`, and an API address under API Usage. Note this address; we will use it in the next step.
#### Launch OpenHands
You can now interact with the model served from LM Studio through OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Click "see advanced settings" on the second line.
In the new tab, toggle "Advanced" on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address we noted in the LM Studio step. Set the API Key to `dummy`. Click "Save changes".
### Ollama
You can run Devstral using the [Ollama](https://ollama.ai/) CLI.
```bash
ollama run devstral
``` |
vertings6/acf6dcee-7652-4c45-a805-328a7ea46762 | vertings6 | 2025-05-21T16:19:49Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"starcoder2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:bigcode/starcoder2-3b",
"base_model:quantized:bigcode/starcoder2-3b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T15:57:07Z | ---
base_model: bigcode/starcoder2-3b
library_name: transformers
model_name: acf6dcee-7652-4c45-a805-328a7ea46762
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for acf6dcee-7652-4c45-a805-328a7ea46762
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vertings6/acf6dcee-7652-4c45-a805-328a7ea46762", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/mjxr3uj3)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
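For orientation, a minimal DPO fine-tuning sketch with TRL; the dataset and hyperparameters below are placeholders, not the actual training setup:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-3b")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")

# Placeholder preference dataset with "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="starcoder2-3b-dpo"),
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```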
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Poll027/my-lora-llama | Poll027 | 2025-05-21T16:18:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-21T16:18:33Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
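In the meantime, a minimal loading sketch based on the base model declared in the metadata above; the prompt and generation settings are illustrative assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "Poll027/my-lora-llama")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```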
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
YNnebesta/testFT-test_model | YNnebesta | 2025-05-21T16:17:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T16:17:27Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YNnebesta
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
concept-unlearning/Meta-Llama-3-8B_npo_gdr_lora_wmdp_bio_v2 | concept-unlearning | 2025-05-21T16:16:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T16:14:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
batmangiaicuuthegioi/bi-encoders-embeddings | batmangiaicuuthegioi | 2025-05-21T16:16:44Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:37059",
"loss:MultipleNegativesRankingLoss",
"dataset:batmangiaicuuthegioi/zalo-legal-triplets",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:AITeamVN/Vietnamese_Embedding",
"base_model:finetune:AITeamVN/Vietnamese_Embedding",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-21T16:15:28Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:37059
- loss:MultipleNegativesRankingLoss
base_model: AITeamVN/Vietnamese_Embedding
widget:
- source_sentence: Quản lý và sử dụng phí bảo vệ môi trường đối với nước thải công
nghiệp được quy định ra sao?
sentences:
- 'Điều 16. Trách nhiệm của Uỷ ban nhân dân cấp huyện, cấp xã nơi có đê. điểm c)
trang bị và hướng dẫn việc quản lý sử dụng các dụng cụ, sổ sách cho các đội tuần
tra, canh gác đê theo quy định tại khoản 2 điều 6 của thông tư này. '
- Điều 33. Quản lý tài khoản, tài sản ký quỹ của thành viên bù trừ. khoản 6. loại
ký quỹ, phương pháp xác định mức ký quỹ, phương thức ký quỹ, thời hạn ký quỹ,
bổ sung ký quỹ, chuyển giao tài sản ký quỹ, phương thức định giá tài sản ký quỹ,
xác định lãi lỗ vị thế, hoạt động quản lý tài khoản và tài sản ký quỹ của thành
viên bù trừ thực hiện theo quy định của bộ trưởng bộ tài chính và quy chế của
tổng công ty lưu ký và bù trừ chứng khoán việt nam.
- Điều 4. Nguyên tắc quản lý và sử dụng phí. khoản 3. phí thu từ các hoạt động
dịch vụ do tổ chức được cơ quan nhà nước có thẩm quyền giao thực hiện được để
lại một phần hoặc toàn bộ số tiền phí thu được để trang trải chi phí hoạt động
cung cấp dịch vụ, thu phí được xác định theo quy định tại điều 5 nghị định này;
phần còn lại (nếu có) nộp ngân sách nhà nước, trừ trường hợp chính phủ có quy
định khác thì thực hiện theo quy định của chính phủ. số tiền phí được để lại là
doanh thu của tổ chức thu phí.
- source_sentence: Ngày bầu cử đại biểu Quốc Hội có phải là ngày chủ nhật?
sentences:
- 'Điều 16. Cử quốc thiều nước Cộng hòa xã hội chủ nghĩa Việt Nam. khoản 1. quốc
thiều việt nam được cử trong các cuộc mít tinh, chiêu đãi chào mừng quốc khánh,
ngày lễ lớn của việt nam hoặc kỷ niệm sự kiện quan trọng trong quan hệ giữa việt
nam với quốc gia hay tổ chức quốc tế tiếp nhận phù hợp với quy định, thông lệ
lễ tân của quốc gia, tổ chức quốc tế tiếp nhận. '
- 'Điều 4. Giải thích từ ngữ. khoản 36. quản lý quỹ đầu tư chứng khoán là hoạt
động quản lý trong việc mua, bán, nắm giữ chứng khoán và các tài sản khác của
quỹ đầu tư chứng khoán. '
- 'Điều 52. Giới thiệu người của cơ quan, tổ chức, đơn vị ứng cử đại biểu Hội đồng
nhân dân. khoản 4. ban công tác mặt trận ở thôn, tổ dân phố dự kiến người của
thôn, tổ dân phố để giới thiệu ứng cử đại biểu hội đồng nhân dân cấp xã và phối
hợp với trưởng thôn, tổ trưởng tổ dân phố tổ chức hội nghị cử tri để thảo luận,
giới thiệu người ứng cử đại biểu hội đồng nhân dân cấp xã. việc giới thiệu người
ứng cử đại biểu hội đồng nhân dân cấp xã ở thôn, tổ dân phố do ủy ban thường vụ
quốc hội hướng dẫn; '
- source_sentence: Nghiên cứu y sinh học đa trung tâm là gì?
sentences:
- 'Điều 64. Vi phạm quy định về cung cấp, sử dụng thiết bị vô tuyến điện được miễn
Giấy phép sử dụng tần số vô tuyến điện. khoản 2. phạt tiền từ < mức phạt tiền
> đến < mức phạt tiền > đối với hành vi sản xuất hoặc nhập khẩu thiết bị vô tuyến
điện thuộc danh mục thiết bị vô tuyến điện được miễn giấy phép sử dụng tần số
vô tuyến điện nhưng không thực hiện chứng nhận và công bố hợp quy trước khi đưa
vào lưu thông trên thị trường. '
- 'Điều 3. Giải thích từ ngữ. khoản 19. nguy cơ (risk) là xác suất mà một sự kiện
hoặc kết quả thuận lợi hay bất lợi xảy ra trong một khoảng thời gian xác định
của nghiên cứu theo tiếp cận của dịch tễ. '
- 'Điều 9. Nội dung tuần tra, canh gác đê. điểm d) mỗi kíp tuần tra phải kiểm tra
vượt quá phạm vi phụ trách về hai phía, mỗi phía 50m. đối với những khu vực đã
từng xảy ra sự cố hư hỏng, phải kiểm tra quan sát rộng hơn để phát hiện sự cố. '
- source_sentence: Không treo biển thông báo không bán thuốc lá cho người dưới 18
tuổi phạt 1 triệu được quy định như thế nào?
sentences:
- 'Điều 49. Hành vi vi phạm về đăng ký hợp đồng theo mẫu, điều kiện giao dịch chung. điểm
c) không áp dụng đúng hợp đồng theo mẫu, điều kiện giao dịch chung đã đăng ký
với cơ quan quản lý nhà nước có thẩm quyền về bảo vệ quyền lợi người tiêu dùng
theo quy định. '
- Điều 15. Khen thưởng, kỷ Luật. khoản 2. những đơn vị và cá nhân vi phạm quy định
tại thông tư này tuỳ theo lỗi nặng nhẹ sẽ bị thi hành kỷ luật từ cảnh cáo đến
truy tố trước pháp luật của nhà nước.
- 'Điều 81. Tước quyền sử dụng giấy phép, chứng chỉ hành nghề có thời hạn hoặc đình
chỉ hoạt động có thời hạn trong lĩnh vực giao thông đường bộ, đường sắt. khoản
5. trường hợp người có hành vi vi phạm bị áp dụng hình thức xử phạt tước quyền
sử dụng giấy phép, chứng chỉ hành nghề nhưng thời hạn sử dụng còn lại của giấy
phép, chứng chỉ hành nghề đó ít hơn thời hạn bị tước thì người có thẩm quyền vẫn
ra quyết định xử phạt có áp dụng hình thức tước quyền sử dụng giấy phép, chứng
chỉ hành nghề theo quy định đối với hành vi vi phạm. trong thời gian bị tước quyền
sử dụng giấy phép, chứng chỉ hành nghề, cá nhân, tổ chức không được làm thủ tục
cấp đổi, cấp mới giấy phép, chứng chỉ hành nghề. '
- source_sentence: Quy định về trao đổi dữ liệu thi hành án hình sự được quy định
như thế nào?
sentences:
- Điều 13. Quy định về bàn giao giữa các kíp trực. sau mỗi đợt kiểm tra, các kíp
tuần tra, canh gác đê phải ghi chép đầy đủ tình hình diễn biến và hư hỏng đê điều
vào sổ nhật ký tuần tra, canh gác theo mẫu quy định và bàn giao đầy đủ cho kíp
sau. người thay mặt kíp giao và nhận phải ký và ghi rõ họ tên, ngày giờ vào sổ.
sau mỗi ngày đội trưởng và cán bộ chuyên trách quản lý đê điều ký xác nhận tình
hình trong ngày để theo dõi và làm cơ sở cho việc chi trả thù lao theo quy định.
- 'Điều 33. Báo cáo của tổ chức tư vấn hồ sơ chào bán trái phiếu, tổ chức đấu thầu,
bảo lãnh, đại lý phát hành, tổ chức đăng ký, lưu ký trái phiếu và Sở giao dịch
chứng khoán. điểm b) ngoài chế độ báo cáo định kỳ theo quy định tại điểm a khoản
này, sở giao dịch chứng khoán báo cáo đột xuất cho ủy ban chứng khoán nhà nước
và bộ tài chính theo yêu cầu của cơ quan quản lý. '
- 'Điều 12. Trao đổi dữ liệu giữa cơ sở dữ liệu về thi hành án hình sự và các cơ
sở dữ liệu khác liên quan. khoản 1. việc trao đổi dữ liệu giữa cơ sở dữ liệu
về thi hành án hình sự và các cơ sở dữ liệu khác liên quan phải thực hiện theo
quy định của pháp luật và quy định của bộ công an, bộ quốc phòng. '
datasets:
- batmangiaicuuthegioi/zalo-legal-triplets
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on AITeamVN/Vietnamese_Embedding
results:
- task:
type: triplet
name: Triplet
dataset:
name: zalo legal
type: zalo_legal
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
---
# SentenceTransformer based on AITeamVN/Vietnamese_Embedding
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [AITeamVN/Vietnamese_Embedding](https://huggingface.co/AITeamVN/Vietnamese_Embedding) on the [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [AITeamVN/Vietnamese_Embedding](https://huggingface.co/AITeamVN/Vietnamese_Embedding) <!-- at revision 9f671cc30908f1d851787efcc05b7d15bad8b615 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("batmangiaicuuthegioi/bi-encoders-embeddings")
# Run inference
sentences = [
'Quy định về trao đổi dữ liệu thi hành án hình sự được quy định như thế nào?',
'Điều 12. Trao đổi dữ liệu giữa cơ sở dữ liệu về thi hành án hình sự và các cơ sở dữ liệu khác liên quan. khoản 1. việc trao đổi dữ liệu giữa cơ sở dữ liệu về thi hành án hình sự và các cơ sở dữ liệu khác liên quan phải thực hiện theo quy định của pháp luật và quy định của bộ công an, bộ quốc phòng. ',
'Điều 13. Quy định về bàn giao giữa các kíp trực. sau mỗi đợt kiểm tra, các kíp tuần tra, canh gác đê phải ghi chép đầy đủ tình hình diễn biến và hư hỏng đê điều vào sổ nhật ký tuần tra, canh gác theo mẫu quy định và bàn giao đầy đủ cho kíp sau. người thay mặt kíp giao và nhận phải ký và ghi rõ họ tên, ngày giờ vào sổ. sau mỗi ngày đội trưởng và cán bộ chuyên trách quản lý đê điều ký xác nhận tình hình trong ngày để theo dõi và làm cơ sở cho việc chi trả thù lao theo quy định.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
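Because the model is geared toward retrieval, a natural follow-up is ranking passages for a query. A minimal sketch reusing the `sentences` list from the snippet above (the query is `sentences[0]`, the passages are the candidates):
```python
# Rank the two passages above for the query using the model's cosine similarity
query_embedding = model.encode([sentences[0]])
corpus = sentences[1:]
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(corpus[best])  # should print the Điều 12 passage on data exchange
```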
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `zalo_legal`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
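As a rough guide, this score can be reproduced with the linked evaluator. Below is a minimal sketch on a single triplet from the dataset (passages abbreviated to their headings for brevity); cosine accuracy is the fraction of triplets whose anchor embedding is closer to the positive than to the negative:
```python
from sentence_transformers.evaluation import TripletEvaluator

evaluator = TripletEvaluator(
    anchors=["Quy định về trao đổi dữ liệu thi hành án hình sự được quy định như thế nào?"],
    positives=["Điều 12. Trao đổi dữ liệu giữa cơ sở dữ liệu về thi hành án hình sự và các cơ sở dữ liệu khác liên quan."],
    negatives=["Điều 13. Quy định về bàn giao giữa các kíp trực."],
    name="zalo_legal",
)
print(evaluator(model))  # e.g. {'zalo_legal_cosine_accuracy': 1.0}
```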
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### zalo-legal-triplets
* Dataset: [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets) at [15e0566](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets/tree/15e0566d390f73b5574a3d928cb8353cb6656fba)
* Size: 37,059 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 22.08 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 82.98 tokens</li><li>max: 344 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 76.65 tokens</li><li>max: 220 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Mức phạt đối với hành vi điều khiển xe máy dẫn, dắt theo súc vật ?</code> | <code>Điều 63. Xử phạt nhân viên đường sắt trực tiếp phục vụ chạy tàu (trừ lái tàu và phụ lái tàu) vi phạm quy định về nồng độ cồn hoặc sử dụng các chất kích thích khác mà pháp luật cấm sử dụng. điểm c) khi làm nhiệm vụ mà trong cơ thể có chất kích thích khác mà pháp luật cấm sử dụng.</code> | <code>Điều 4. Nhiệm vụ của lực lượng tuần tra, canh gác đê. khoản 5. đeo phù hiệu khi làm nhiệm vụ.</code> |
| <code>Theo quy định pháp luật, dẫn xuất của các loài động vật, thực vật là gì?</code> | <code>Điều 3. Giải thích từ ngữ. khoản 26. mẫu vật săn bắt là mẫu vật có được từ các hoạt động săn bắt hợp pháp. </code> | <code>Điều 17. Trách nhiệm của Sở Nông nghiệp và Phát triển nông thôn. khoản 3. khi có báo động lũ từ cấp i trở lên, sở nông nghiệp và phát triển nông thôn phải chỉ đạo, tổ chức kiểm tra, đôn đốc công tác tuần tra, canh gác ở các tuyến đê.</code> |
| <code>Mục tiêu của giáo dục nghề nghiệp từ tháng 7/2020 được quy định như thế nào?</code> | <code>Điều 36. Mục tiêu của giáo dục nghề nghiệp. giáo dục nghề nghiệp nhằm đào tạo nhân lực trực tiếp cho sản xuất, kinh doanh và dịch vụ, có năng lực hành nghề tương ứng với trình độ đào tạo; có đạo đức, sức khỏe; có trách nhiệm nghề nghiệp; có khả năng sáng tạo, thích ứng với môi trường hội nhập quốc tế; bảo đảm nâng cao năng suất, chất lượng lao động; tạo điều kiện cho người học sau khi hoàn thành khóa học có khả năng tìm việc làm, tự tạo việc làm hoặc học trình độ cao hơn.</code> | <code>Điều 3. Tiêu chuẩn của các thành viên thuộc lực lượng tuần tra, canh gác đê. khoản 2. có tinh thần trách nhiệm, chịu đựng gian khổ, khắc phục khó khăn, quen sông nước và biết bơi, có kiến thức, kinh nghiệm hộ đê, phòng, chống lụt, bão.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
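In code, this loss configuration corresponds roughly to the following minimal sketch (`cos_sim` is the default similarity function, so only the scale is passed explicitly):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("AITeamVN/Vietnamese_Embedding")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```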
### Evaluation Dataset
#### zalo-legal-triplets
* Dataset: [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets) at [15e0566](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets/tree/15e0566d390f73b5574a3d928cb8353cb6656fba)
* Size: 37,059 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 21.7 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.22 tokens</li><li>max: 327 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 74.1 tokens</li><li>max: 220 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Nghiên cứu y sinh học liên quan đến con người là gì?</code> | <code>Điều 31. Thẩm định nghiên cứu theo quy trình rút gọn. khoản 4. ngoại trừ trường hợp họp khẩn cấp, tất cả tài liệu đề nghị xem xét phải được gửi tới thành viên hội đồng đạo đức được phân công nhận xét trước ít nhất 05 ngày làm việc so với ngày yêu cầu gửi lại phiếu nhận xét, đánh giá nghiên cứu. </code> | <code>Điều 10. Nội dung tuần tra canh gác cống qua đê. khoản 2. người tuần tra, canh gác phải kiểm tra kỹ phần tiếp giáp giữa thân cống, tường cánh gà của cống với đê; cánh cống, bộ phận đóng mở cánh cống, cửa cống, thân cống và khu vực thượng, hạ lưu cống để phát hiện kịp thời những sự cố xảy ra. </code> |
| <code>Hồ sơ cấp lại Giấy chứng nhận đủ điều kiện hoạt động dịch vụ giám định công nghệ bao gồm những giấy tờ gì?</code> | <code>Điều 38. Hồ sơ cấp Giấy chứng nhận đủ điều kiện hoạt động dịch vụ giám định công nghệ. điểm e) mẫu chứng thư giám định của tổ chức. </code> | <code>Điều 6. Trang bị dụng cụ, sổ sách. khoản 7. việc giao nhận các dụng cụ và sổ sách trên đây phải được lập biên bản để quản lý, theo dõi.</code> |
| <code>Chạy quá tốc độ bao nhiêu km thì xe ô tô sẽ bị giam bằng?</code> | <code>Điều 55. Xử phạt các hành vi vi phạm quy định quản lý, bảo trì kết cấu hạ tầng đường sắt. điểm b) thực hiện hành vi quy định tại điểm c khoản 3 điều này buộc phải tổ chức sửa chữa, bổ sung, gia cố, thay thế các hư hỏng kết cấu hạ tầng đường sắt để bảo đảm chất lượng theo công lệnh tốc độ, công lệnh tải trọng đã công bố.</code> | <code>Điều 9. Nội dung tuần tra, canh gác đê. điểm d) mỗi kíp tuần tra phải kiểm tra vượt quá phạm vi phụ trách về hai phía, mỗi phía 50m. đối với những khu vực đã từng xảy ra sự cố hư hỏng, phải kiểm tra quan sát rộng hơn để phát hiện sự cố. </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 2
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
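Assembled into a training script, these settings would look roughly like the hypothetical sketch below; `model`, `loss`, `train_dataset`, and `eval_dataset` stand for the objects described in the sections above, and `output_dir` is a placeholder:
```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="bi-encoders-embeddings",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```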
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | zalo_legal_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------:|
| 0.3084 | 2000 | 0.2978 | 0.0778 | 0.9996 |
| 0.6167 | 4000 | 0.1735 | 0.0522 | 1.0 |
| 0.9251 | 6000 | 0.1148 | 0.0330 | 1.0 |
| 1.0 | 6486 | - | - | 1.0 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
fizziehaq/q_learn-taxi-v3 | fizziehaq | 2025-05-21T16:16:07Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-21T16:16:04Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_learn-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="fizziehaq/q_learn-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
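A short greedy rollout makes the loaded artifact concrete. This is a hedged sketch that assumes the pickled dict stores the table under a `qtable` key, as in the course format:
```python
import numpy as np

# Act greedily with the loaded Q-table for one episode
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key assumed
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```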
|
tomaarsen/splade-distilbert-base-uncased-nq-updated-sparsity | tomaarsen | 2025-05-21T16:15:28Z | 0 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"distilbert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:99000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:1705.00652",
"arxiv:2004.05665",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-21T16:15:01Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: distilbert/distilbert-base-uncased
widget:
- text: Rollin' (Limp Bizkit song) The music video was filmed atop the South Tower
of the former World Trade Center in New York City. The introduction features Ben
Stiller and Stephen Dorff mistaking Fred Durst for the valet and giving him the
keys to their Bentley Azure. Also making a cameo is break dancer Mr. Wiggles.
The rest of the video has several cuts to Durst and his bandmates hanging out
of the Bentley as they drive about Manhattan. The song Ben Stiller is playing
at the beginning is "My Generation" from the same album. The video also features
scenes of Fred Durst with five girls dancing in a room. The video was filmed around
the same time as the film Zoolander, which explains Stiller and Dorff's appearance.
Fred Durst has a small cameo in that film.
- text: 'Maze Runner: The Death Cure On April 22, 2017, the studio delayed the release
date once again, to February 9, 2018, in order to allow more time for post-production;
months later, on August 25, the studio moved the release forward two weeks.[17]
The film will premiere on January 26, 2018 in 3D, IMAX and IMAX 3D.[18][19]'
- text: who played the dj in the movie the warriors
- text: Lionel Messi Born and raised in central Argentina, Messi was diagnosed with
a growth hormone deficiency as a child. At age 13, he relocated to Spain to join
Barcelona, who agreed to pay for his medical treatment. After a fast progression
through Barcelona's youth academy, Messi made his competitive debut aged 17 in
October 2004. Despite being injury-prone during his early career, he established
himself as an integral player for the club within the next three years, finishing
2007 as a finalist for both the Ballon d'Or and FIFA World Player of the Year
award, a feat he repeated the following year. His first uninterrupted campaign
came in the 2008–09 season, during which he helped Barcelona achieve the first
treble in Spanish football. At 22 years old, Messi won the Ballon d'Or and FIFA
World Player of the Year award by record voting margins.
- text: 'Send In the Clowns "Send In the Clowns" is a song written by Stephen Sondheim
for the 1973 musical A Little Night Music, an adaptation of Ingmar Bergman''s
film Smiles of a Summer Night. It is a ballad from Act Two, in which the character
Desirée reflects on the ironies and disappointments of her life. Among other things,
she looks back on an affair years earlier with the lawyer Fredrik, who was deeply
in love with her but whose marriage proposals she had rejected. Meeting him after
so long, she realizes she is in love with him and finally ready to marry him,
but now it is he who rejects her: he is in an unconsummated marriage with a much
younger woman. Desirée proposes marriage to rescue him from this situation, but
he declines, citing his dedication to his bride. Reacting to his rejection, Desirée
sings this song. The song is later reprised as a coda after Fredrik''s young wife
runs away with his son, and Fredrik is finally free to accept Desirée''s offer.[1]'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 32.40901449048007
energy_consumed: 0.08337753469362151
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.285
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-distilbert-base-uncased trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.28
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.52
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.28
name: Dot Precision@1
- type: dot_precision@3
value: 0.1733333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.12000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.28
name: Dot Recall@1
- type: dot_recall@3
value: 0.52
name: Dot Recall@3
- type: dot_recall@5
value: 0.6
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4954197868237354
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.41905555555555546
name: Dot Mrr@10
- type: dot_map@100
value: 0.43020916049077634
name: Dot Map@100
- type: query_active_dims
value: 62.65999984741211
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9979470545885784
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 110.4578628540039
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9963810411226655
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.26
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.5
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.26
name: Dot Precision@1
- type: dot_precision@3
value: 0.16666666666666669
name: Dot Precision@3
- type: dot_precision@5
value: 0.128
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.26
name: Dot Recall@1
- type: dot_recall@3
value: 0.5
name: Dot Recall@3
- type: dot_recall@5
value: 0.64
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4944666703438861
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.41657936507936505
name: Dot Mrr@10
- type: dot_map@100
value: 0.42694690636460897
name: Dot Map@100
- type: query_active_dims
value: 70.22000122070312
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9976993643529027
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 125.49811553955078
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9958882735227197
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.32
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.46
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.32
name: Dot Precision@1
- type: dot_precision@3
value: 0.3133333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.272
name: Dot Precision@5
- type: dot_precision@10
value: 0.22399999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.01892455420216294
name: Dot Recall@1
- type: dot_recall@3
value: 0.04889990251243477
name: Dot Recall@3
- type: dot_recall@5
value: 0.0672946061870769
name: Dot Recall@5
- type: dot_recall@10
value: 0.08887922550901164
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.26311322734975795
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3882460317460318
name: Dot Mrr@10
- type: dot_map@100
value: 0.11155968685488596
name: Dot Map@100
- type: query_active_dims
value: 74.22000122070312
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9975683113419598
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 152.51846313476562
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9950029990454503
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.46
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.48
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.58
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.31999999999999995
name: Dot Precision@3
- type: dot_precision@5
value: 0.264
name: Dot Precision@5
- type: dot_precision@10
value: 0.23200000000000004
name: Dot Precision@10
- type: dot_recall@1
value: 0.01967175630881205
name: Dot Recall@1
- type: dot_recall@3
value: 0.04958955768484856
name: Dot Recall@3
- type: dot_recall@5
value: 0.06588472678704523
name: Dot Recall@5
- type: dot_recall@10
value: 0.08890872761034473
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2726981353115194
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.42394444444444446
name: Dot Mrr@10
- type: dot_map@100
value: 0.11062543949876841
name: Dot Map@100
- type: query_active_dims
value: 85.58000183105469
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971961207708848
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 182.97967529296875
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9940049906528744
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.68
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.2
name: Dot Precision@3
- type: dot_precision@5
value: 0.136
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.34
name: Dot Recall@1
- type: dot_recall@3
value: 0.56
name: Dot Recall@3
- type: dot_recall@5
value: 0.64
name: Dot Recall@5
- type: dot_recall@10
value: 0.66
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5163228308253419
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.48788888888888876
name: Dot Mrr@10
- type: dot_map@100
value: 0.4744598045833104
name: Dot Map@100
- type: query_active_dims
value: 46.939998626708984
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9984620929615783
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 96.43376159667969
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9968405162965507
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.38
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.58
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.68
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.38
name: Dot Precision@1
- type: dot_precision@3
value: 0.19333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.136
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.35
name: Dot Recall@1
- type: dot_recall@3
value: 0.55
name: Dot Recall@3
- type: dot_recall@5
value: 0.65
name: Dot Recall@5
- type: dot_recall@10
value: 0.66
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5211787059288393
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.49649999999999994
name: Dot Mrr@10
- type: dot_map@100
value: 0.48018058391724333
name: Dot Map@100
- type: query_active_dims
value: 55.08000183105469
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9981953999793246
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 114.79106140136719
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9962390714435041
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.32
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.52
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5800000000000001
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.6533333333333333
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.32
name: Dot Precision@1
- type: dot_precision@3
value: 0.22888888888888884
name: Dot Precision@3
- type: dot_precision@5
value: 0.17600000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.12266666666666666
name: Dot Precision@10
- type: dot_recall@1
value: 0.212974851400721
name: Dot Recall@1
- type: dot_recall@3
value: 0.37629996750414496
name: Dot Recall@3
- type: dot_recall@5
value: 0.43576486872902565
name: Dot Recall@5
- type: dot_recall@10
value: 0.4962930751696706
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.42495194833294514
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4317301587301587
name: Dot Mrr@10
- type: dot_map@100
value: 0.33874288397632424
name: Dot Map@100
- type: query_active_dims
value: 61.27333323160807
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9979924862973721
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 119.80336252848308
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9960748521548889
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.4594348508634223
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6475039246467816
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7245525902668759
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7984615384615383
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4594348508634223
name: Dot Precision@1
- type: dot_precision@3
value: 0.29682888540031394
name: Dot Precision@3
- type: dot_precision@5
value: 0.23129042386185245
name: Dot Precision@5
- type: dot_precision@10
value: 0.1639026687598116
name: Dot Precision@10
- type: dot_recall@1
value: 0.2607784592309238
name: Dot Recall@1
- type: dot_recall@3
value: 0.41707679910314266
name: Dot Recall@3
- type: dot_recall@5
value: 0.4815762664814047
name: Dot Recall@5
- type: dot_recall@10
value: 0.5608540436995575
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5109231200022869
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5709977698038922
name: Dot Mrr@10
- type: dot_map@100
value: 0.4325529585492243
name: Dot Map@100
- type: query_active_dims
value: 86.22103676429161
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971751183813548
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 137.4154827411358
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9954978218091496
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: dot_accuracy@1
value: 0.26
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.36
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.48
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.58
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.26
name: Dot Precision@1
- type: dot_precision@3
value: 0.13333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.10800000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.074
name: Dot Precision@10
- type: dot_recall@1
value: 0.12833333333333333
name: Dot Recall@1
- type: dot_recall@3
value: 0.18833333333333332
name: Dot Recall@3
- type: dot_recall@5
value: 0.24666666666666665
name: Dot Recall@5
- type: dot_recall@10
value: 0.30333333333333334
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2564995235608964
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3453015873015872
name: Dot Mrr@10
- type: dot_map@100
value: 0.2062826189577625
name: Dot Map@100
- type: query_active_dims
value: 86.26000213623047
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971738417490259
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0489959716797
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9958046983824231
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: dot_accuracy@1
value: 0.62
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.84
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.92
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.62
name: Dot Precision@1
- type: dot_precision@3
value: 0.54
name: Dot Precision@3
- type: dot_precision@5
value: 0.49200000000000005
name: Dot Precision@5
- type: dot_precision@10
value: 0.43400000000000005
name: Dot Precision@10
- type: dot_recall@1
value: 0.08260659025654458
name: Dot Recall@1
- type: dot_recall@3
value: 0.14565005878146683
name: Dot Recall@3
- type: dot_recall@5
value: 0.1854201572717294
name: Dot Recall@5
- type: dot_recall@10
value: 0.2804326420122478
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.534178112145825
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7352222222222222
name: Dot Mrr@10
- type: dot_map@100
value: 0.41480896090579994
name: Dot Map@100
- type: query_active_dims
value: 55.939998626708984
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9981672236869567
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 125.40165710449219
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9958914338148059
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: dot_accuracy@1
value: 0.64
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.84
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.98
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.64
name: Dot Precision@1
- type: dot_precision@3
value: 0.2866666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.18799999999999997
name: Dot Precision@5
- type: dot_precision@10
value: 0.10399999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.6166666666666667
name: Dot Recall@1
- type: dot_recall@3
value: 0.8166666666666668
name: Dot Recall@3
- type: dot_recall@5
value: 0.8766666666666667
name: Dot Recall@5
- type: dot_recall@10
value: 0.9433333333333332
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7890721601412974
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7516904761904764
name: Dot Mrr@10
- type: dot_map@100
value: 0.7337194522253345
name: Dot Map@100
- type: query_active_dims
value: 84.86000061035156
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9972197103528487
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 142.34327697753906
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9953363712411526
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.52
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.66
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.16666666666666669
name: Dot Precision@3
- type: dot_precision@5
value: 0.128
name: Dot Precision@5
- type: dot_precision@10
value: 0.09199999999999997
name: Dot Precision@10
- type: dot_recall@1
value: 0.13585714285714284
name: Dot Recall@1
- type: dot_recall@3
value: 0.28919047619047616
name: Dot Recall@3
- type: dot_recall@5
value: 0.33274603174603173
name: Dot Recall@5
- type: dot_recall@10
value: 0.4233015873015873
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.32546154855128656
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3704365079365079
name: Dot Mrr@10
- type: dot_map@100
value: 0.26492479686181064
name: Dot Map@100
- type: query_active_dims
value: 65.44000244140625
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9978559726609854
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 132.02734375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9956743547686915
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: dot_accuracy@1
value: 0.74
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.94
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.96
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.74
name: Dot Precision@1
- type: dot_precision@3
value: 0.41999999999999993
name: Dot Precision@3
- type: dot_precision@5
value: 0.27599999999999997
name: Dot Precision@5
- type: dot_precision@10
value: 0.156
name: Dot Precision@10
- type: dot_recall@1
value: 0.37
name: Dot Recall@1
- type: dot_recall@3
value: 0.63
name: Dot Recall@3
- type: dot_recall@5
value: 0.69
name: Dot Recall@5
- type: dot_recall@10
value: 0.78
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7118024522387334
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8268888888888888
name: Dot Mrr@10
- type: dot_map@100
value: 0.6307915421731377
name: Dot Map@100
- type: query_active_dims
value: 81.9000015258789
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973166895509509
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 142.991943359375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9953151188205434
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: dot_accuracy@1
value: 0.86
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.94
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.98
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.86
name: Dot Precision@1
- type: dot_precision@3
value: 0.3666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.24799999999999997
name: Dot Precision@5
- type: dot_precision@10
value: 0.132
name: Dot Precision@10
- type: dot_recall@1
value: 0.7706666666666666
name: Dot Recall@1
- type: dot_recall@3
value: 0.8846666666666667
name: Dot Recall@3
- type: dot_recall@5
value: 0.9359999999999999
name: Dot Recall@5
- type: dot_recall@10
value: 0.9733333333333333
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.909591417031897
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.904190476190476
name: Dot Mrr@10
- type: dot_map@100
value: 0.8825369408369409
name: Dot Map@100
- type: query_active_dims
value: 58.36000061035156
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9980879365503456
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.70333099365234
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9978801084138113
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.28
name: Dot Precision@3
- type: dot_precision@5
value: 0.236
name: Dot Precision@5
- type: dot_precision@10
value: 0.16799999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.086
name: Dot Recall@1
- type: dot_recall@3
value: 0.17566666666666667
name: Dot Recall@3
- type: dot_recall@5
value: 0.24466666666666664
name: Dot Recall@5
- type: dot_recall@10
value: 0.3446666666666666
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3317925768694159
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.533222222222222
name: Dot Mrr@10
- type: dot_map@100
value: 0.25209583120462153
name: Dot Map@100
- type: query_active_dims
value: 134.1999969482422
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9956031715828503
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 164.88478088378906
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9945978382516287
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: dot_accuracy@1
value: 0.1
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.52
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.62
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.1
name: Dot Precision@1
- type: dot_precision@3
value: 0.1733333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.12400000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.08
name: Dot Precision@10
- type: dot_recall@1
value: 0.1
name: Dot Recall@1
- type: dot_recall@3
value: 0.52
name: Dot Recall@3
- type: dot_recall@5
value: 0.62
name: Dot Recall@5
- type: dot_recall@10
value: 0.8
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.44172833183312293
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.32852380952380955
name: Dot Mrr@10
- type: dot_map@100
value: 0.3339302930314127
name: Dot Map@100
- type: query_active_dims
value: 152.0399932861328
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9950186752740275
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 149.56478881835938
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9950997710235777
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: dot_accuracy@1
value: 0.46
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.56
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.68
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.46
name: Dot Precision@1
- type: dot_precision@3
value: 0.20666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.14
name: Dot Precision@5
- type: dot_precision@10
value: 0.07800000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.425
name: Dot Recall@1
- type: dot_recall@3
value: 0.545
name: Dot Recall@3
- type: dot_recall@5
value: 0.59
name: Dot Recall@5
- type: dot_recall@10
value: 0.67
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5519450641329704
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5232698412698412
name: Dot Mrr@10
- type: dot_map@100
value: 0.5187507919958133
name: Dot Map@100
- type: query_active_dims
value: 138.32000732421875
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9954681866416284
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 166.03871154785156
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9945600317296425
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: dot_accuracy@1
value: 0.6326530612244898
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.8775510204081632
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9591836734693877
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.6326530612244898
name: Dot Precision@1
- type: dot_precision@3
value: 0.6054421768707483
name: Dot Precision@3
- type: dot_precision@5
value: 0.5387755102040817
name: Dot Precision@5
- type: dot_precision@10
value: 0.43673469387755104
name: Dot Precision@10
- type: dot_recall@1
value: 0.04531781391284345
name: Dot Recall@1
- type: dot_recall@3
value: 0.12723496235073023
name: Dot Recall@3
- type: dot_recall@5
value: 0.18244054845345592
name: Dot Recall@5
- type: dot_recall@10
value: 0.2837929445033988
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5015858619400403
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.767201166180758
name: Dot Mrr@10
- type: dot_map@100
value: 0.3675943031666616
name: Dot Map@100
- type: query_active_dims
value: 52.67346954345703
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9982742458048799
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 147.12759399414062
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9951796214535699
name: Corpus Sparsity Ratio
---
# splade-distilbert-base-uncased trained on Natural Questions
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-nq-updated-sparsity")
# Run inference
sentences = [
'is send in the clowns from a musical',
'Send In the Clowns "Send In the Clowns" is a song written by Stephen Sondheim for the 1973 musical A Little Night Music, an adaptation of Ingmar Bergman\'s film Smiles of a Summer Night. It is a ballad from Act Two, in which the character Desirée reflects on the ironies and disappointments of her life. Among other things, she looks back on an affair years earlier with the lawyer Fredrik, who was deeply in love with her but whose marriage proposals she had rejected. Meeting him after so long, she realizes she is in love with him and finally ready to marry him, but now it is he who rejects her: he is in an unconsummated marriage with a much younger woman. Desirée proposes marriage to rescue him from this situation, but he declines, citing his dedication to his bride. Reacting to his rejection, Desirée sings this song. The song is later reprised as a coda after Fredrik\'s young wife runs away with his son, and Fredrik is finally free to accept Desirée\'s offer.[1]',
'The Suite Life on Deck The Suite Life on Deck is an American sitcom that aired on Disney Channel from September 26, 2008 to May 6, 2011. It is a sequel/spin-off of the Disney Channel Original Series The Suite Life of Zack & Cody. The series follows twin brothers Zack and Cody Martin and hotel heiress London Tipton in a new setting, the SS Tipton, where they attend classes at "Seven Seas High School" and meet Bailey Pickett while Mr. Moseby manages the ship. The ship travels around the world to nations such as Italy, France, Greece, India, Sweden and the United Kingdom where the characters experience different cultures, adventures, and situations.[1]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 30522)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
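Since the embeddings live in the 30522-dimensional vocabulary space, it can be instructive to inspect which terms a text activates. A minimal sketch, assuming `encode` returned dense tensors (call `.to_dense()` first if they are sparse):
```python
import torch

# Show the ten strongest vocabulary activations of the first embedding
emb = torch.as_tensor(embeddings[0])
values, indices = torch.topk(emb, k=10)
tokens = model.tokenizer.convert_ids_to_tokens(indices.tolist())
for token, value in zip(tokens, values.tolist()):
    print(f"{token}: {value:.2f}")
```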
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:------------|:-------------|:-----------|:-----------------|:------------|:-----------|:-------------|:-------------|:-------------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.26 | 0.36 | 0.38 | 0.26 | 0.62 | 0.64 | 0.24 | 0.74 | 0.86 | 0.42 | 0.1 | 0.46 | 0.6327 |
| dot_accuracy@3 | 0.5 | 0.46 | 0.58 | 0.36 | 0.84 | 0.84 | 0.44 | 0.9 | 0.94 | 0.6 | 0.52 | 0.56 | 0.8776 |
| dot_accuracy@5 | 0.64 | 0.48 | 0.68 | 0.48 | 0.9 | 0.9 | 0.52 | 0.94 | 0.98 | 0.72 | 0.62 | 0.6 | 0.9592 |
| dot_accuracy@10 | 0.74 | 0.58 | 0.7 | 0.58 | 0.92 | 0.98 | 0.66 | 0.96 | 1.0 | 0.78 | 0.8 | 0.68 | 1.0 |
| dot_precision@1 | 0.26 | 0.36 | 0.38 | 0.26 | 0.62 | 0.64 | 0.24 | 0.74 | 0.86 | 0.42 | 0.1 | 0.46 | 0.6327 |
| dot_precision@3 | 0.1667 | 0.32 | 0.1933 | 0.1333 | 0.54 | 0.2867 | 0.1667 | 0.42 | 0.3667 | 0.28 | 0.1733 | 0.2067 | 0.6054 |
| dot_precision@5 | 0.128 | 0.264 | 0.136 | 0.108 | 0.492 | 0.188 | 0.128 | 0.276 | 0.248 | 0.236 | 0.124 | 0.14 | 0.5388 |
| dot_precision@10 | 0.074 | 0.232 | 0.07 | 0.074 | 0.434 | 0.104 | 0.092 | 0.156 | 0.132 | 0.168 | 0.08 | 0.078 | 0.4367 |
| dot_recall@1 | 0.26 | 0.0197 | 0.35 | 0.1283 | 0.0826 | 0.6167 | 0.1359 | 0.37 | 0.7707 | 0.086 | 0.1 | 0.425 | 0.0453 |
| dot_recall@3 | 0.5 | 0.0496 | 0.55 | 0.1883 | 0.1457 | 0.8167 | 0.2892 | 0.63 | 0.8847 | 0.1757 | 0.52 | 0.545 | 0.1272 |
| dot_recall@5 | 0.64 | 0.0659 | 0.65 | 0.2467 | 0.1854 | 0.8767 | 0.3327 | 0.69 | 0.936 | 0.2447 | 0.62 | 0.59 | 0.1824 |
| dot_recall@10 | 0.74 | 0.0889 | 0.66 | 0.3033 | 0.2804 | 0.9433 | 0.4233 | 0.78 | 0.9733 | 0.3447 | 0.8 | 0.67 | 0.2838 |
| **dot_ndcg@10** | **0.4945** | **0.2727** | **0.5212** | **0.2565** | **0.5342** | **0.7891** | **0.3255** | **0.7118** | **0.9096** | **0.3318** | **0.4417** | **0.5519** | **0.5016** |
| dot_mrr@10 | 0.4166 | 0.4239 | 0.4965 | 0.3453 | 0.7352 | 0.7517 | 0.3704 | 0.8269 | 0.9042 | 0.5332 | 0.3285 | 0.5233 | 0.7672 |
| dot_map@100 | 0.4269 | 0.1106 | 0.4802 | 0.2063 | 0.4148 | 0.7337 | 0.2649 | 0.6308 | 0.8825 | 0.2521 | 0.3339 | 0.5188 | 0.3676 |
| query_active_dims | 70.22 | 85.58 | 55.08 | 86.26 | 55.94 | 84.86 | 65.44 | 81.9 | 58.36 | 134.2 | 152.04 | 138.32 | 52.6735 |
| query_sparsity_ratio | 0.9977 | 0.9972 | 0.9982 | 0.9972 | 0.9982 | 0.9972 | 0.9979 | 0.9973 | 0.9981 | 0.9956 | 0.995 | 0.9955 | 0.9983 |
| corpus_active_dims | 125.4981 | 182.9797 | 114.7911 | 128.049 | 125.4017 | 142.3433 | 132.0273 | 142.9919 | 64.7033 | 164.8848 | 149.5648 | 166.0387 | 147.1276 |
| corpus_sparsity_ratio | 0.9959 | 0.994 | 0.9962 | 0.9958 | 0.9959 | 0.9953 | 0.9957 | 0.9953 | 0.9979 | 0.9946 | 0.9951 | 0.9946 | 0.9952 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
]
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.32 |
| dot_accuracy@3 | 0.52 |
| dot_accuracy@5 | 0.58 |
| dot_accuracy@10 | 0.6533 |
| dot_precision@1 | 0.32 |
| dot_precision@3 | 0.2289 |
| dot_precision@5 | 0.176 |
| dot_precision@10 | 0.1227 |
| dot_recall@1 | 0.213 |
| dot_recall@3 | 0.3763 |
| dot_recall@5 | 0.4358 |
| dot_recall@10 | 0.4963 |
| **dot_ndcg@10** | **0.425** |
| dot_mrr@10 | 0.4317 |
| dot_map@100 | 0.3387 |
| query_active_dims | 61.2733 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 119.8034 |
| corpus_sparsity_ratio | 0.9961 |
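As a rough sketch, these NanoBEIR means can be reproduced with the evaluator linked above; the `dataset_names` argument mirrors the JSON parameters, and `model` is assumed to be the loaded sparse encoder:

```python
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

# Evaluate on the same three NanoBEIR subsets as in the table above.
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results)  # includes dot_ndcg@10 plus active-dims / sparsity statistics
```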
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4594 |
| dot_accuracy@3 | 0.6475 |
| dot_accuracy@5 | 0.7246 |
| dot_accuracy@10 | 0.7985 |
| dot_precision@1 | 0.4594 |
| dot_precision@3 | 0.2968 |
| dot_precision@5 | 0.2313 |
| dot_precision@10 | 0.1639 |
| dot_recall@1 | 0.2608 |
| dot_recall@3 | 0.4171 |
| dot_recall@5 | 0.4816 |
| dot_recall@10 | 0.5609 |
| **dot_ndcg@10** | **0.5109** |
| dot_mrr@10 | 0.571 |
| dot_map@100 | 0.4326 |
| query_active_dims | 86.221 |
| query_sparsity_ratio | 0.9972 |
| corpus_active_dims | 137.4155 |
| corpus_sparsity_ratio | 0.9955 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
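A minimal sketch of constructing this loss, using the class paths linked above and the same regularization weights (treat the exact keyword names as an assumption tied to this Sentence Transformers version):

```python
from sentence_transformers.sparse_encoder.losses import (
    SparseMultipleNegativesRankingLoss,
    SpladeLoss,
)

# In-batch-negatives ranking loss wrapped with FLOPS regularization
# on the query and corpus representations.
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_query=5e-05,
    lambda_corpus=3e-05,
)
```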
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
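For reference, a sketch of how these non-default values might map onto the training arguments (`SparseEncoderTrainingArguments` and the output path are assumptions; the field names follow the Hugging Face `TrainingArguments` conventions used throughout this card):

```python
from sentence_transformers.sparse_encoder import SparseEncoderTrainingArguments

args = SparseEncoderTrainingArguments(
    output_dir="models/splade-nq",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler="no_duplicates",
)
```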
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|
| 0.0242 | 200 | 4.7655 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0485 | 400 | 0.168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0727 | 600 | 0.0672 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0970 | 800 | 0.0533 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1212 | 1000 | 0.0605 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1455 | 1200 | 0.051 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1697 | 1400 | 0.0244 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 1600 | 0.0306 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2 | 1650 | - | 0.0220 | 0.4946 | 0.2654 | 0.4801 | 0.4134 | - | - | - | - | - | - | - | - | - | - |
| 0.2182 | 1800 | 0.0246 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2424 | 2000 | 0.0445 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2667 | 2200 | 0.0322 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2909 | 2400 | 0.0316 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3152 | 2600 | 0.029 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3394 | 2800 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3636 | 3000 | 0.0312 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3879 | 3200 | 0.0232 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4 | 3300 | - | 0.0155 | 0.4420 | 0.2753 | 0.5112 | 0.4095 | - | - | - | - | - | - | - | - | - | - |
| 0.4121 | 3400 | 0.0245 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4364 | 3600 | 0.0233 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4606 | 3800 | 0.0224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4848 | 4000 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5091 | 4200 | 0.0269 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5333 | 4400 | 0.0245 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5576 | 4600 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 4800 | 0.0276 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6 | 4950 | - | 0.0098 | 0.4901 | 0.2460 | 0.5124 | 0.4162 | - | - | - | - | - | - | - | - | - | - |
| 0.6061 | 5000 | 0.0193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6303 | 5200 | 0.0223 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6545 | 5400 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6788 | 5600 | 0.0254 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7030 | 5800 | 0.0197 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7273 | 6000 | 0.0271 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7515 | 6200 | 0.02 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7758 | 6400 | 0.0088 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.8** | **6600** | **0.0125** | **0.0233** | **0.4945** | **0.2727** | **0.5212** | **0.4294** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.8242 | 6800 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8485 | 7000 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8727 | 7200 | 0.0192 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8970 | 7400 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9212 | 7600 | 0.0086 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9455 | 7800 | 0.0205 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9697 | 8000 | 0.0267 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9939 | 8200 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 8250 | - | 0.0174 | 0.4954 | 0.2631 | 0.5163 | 0.4250 | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | 0.4945 | 0.2727 | 0.5212 | 0.5109 | 0.2565 | 0.5342 | 0.7891 | 0.3255 | 0.7118 | 0.9096 | 0.3318 | 0.4417 | 0.5519 | 0.5016 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.083 kWh
- **Carbon Emitted**: 0.032 kg of CO2
- **Hours Used**: 0.285 hours
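A minimal sketch of how such measurements are typically collected with CodeCarbon (here `trainer.train()` is a placeholder for the actual training run):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # placeholder: the actual training run
finally:
    emissions_kg = tracker.stop()  # emitted CO2 in kg, as reported above

print(f"{emissions_kg:.3f} kg CO2")
```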
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
unsloth/Devstral-Small-2505-bnb-4bit | unsloth | 2025-05-21T16:15:22Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"text2text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Devstral-Small-2505",
"base_model:quantized:mistralai/Devstral-Small-2505",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | text2text-generation | 2025-05-21T16:14:05Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
---
# Model Card for mistralai/Devstral-Small-2505
Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-Bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).
It is fine-tuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503) and therefore has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning.
For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).
## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: At a compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32 GB of RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
### SWE-Bench
Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.
| Model | Scaffold | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral | OpenHands Scaffold | **46.8** |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 232B-A22B.

## Usage
We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold.
You can use it either through our API or by running locally.
### API
Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.
Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik
mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.39
```
### Local inference
You can also run the model locally, using LMStudio or one of the other providers listed below.
The model can also be deployed with the following libraries:
- [`LMStudio (recommended for quantized model)`](https://lmstudio.ai/): See [here](#lmstudio-recommended-for-quantized-model)
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)
### OpenHands (recommended)
#### Launch a server to deploy Devstral-Small-2505
Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`.
For this tutorial, we spun up a vLLM server with the command:
```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
The server address should be in the following format: `http://<your-server-url>:8000/v1`
#### Launch OpenHands
You can follow the OpenHands installation instructions [here](https://docs.all-hands.dev/modules/usage/installation).
The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
Then, you can access the OpenHands UI at `http://localhost:3000`.
#### Connect to the server
When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.
Fill the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server if any)
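Before filling these in, you can sanity-check that the OpenAI-compatible endpoint is reachable and lists the expected model name; a quick sketch against the vLLM server launched above:

```bash
curl http://<your-server-url>:8000/v1/models \
  -H "Authorization: Bearer token"
```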
#### Use OpenHands powered by Devstral
Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.
<details>
<summary>To-Do list app</summary>
1. Let's ask Devstral to generate the app with the following prompt:
```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
- Allows to add a task.
- Allows to delete a task.
- Allows to mark a task as done.
- Displays the list of tasks.
- Store the tasks in a SQLite database.
```

2. Let's see the result
You should see the agent construct the app and be able to explore the code it generated.
If it doesn't happen automatically, ask Devstral to deploy the app or deploy it manually, then open the deployed frontend URL to see the app.


3. Iterate
Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click on a task to mark it checked, but a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or to filter the tasks by status.
Enjoy building with Devstral Small and OpenHands!
</details>
### LMStudio (recommended for quantized model)
Download the weights from huggingface:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2505_gguf" \
--include "devstralQ4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2505_gguf/"
```
You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LMStudio application, click the terminal icon to open the developer tab, click "Select a model to load" and select Devstral Q4 K M. Toggle the status button to start the model, and in settings toggle "Serve on Local Network" on.
* On the right tab you will see an API identifier, which should be `devstralq4_k_m`, and an API address under "API Usage". Note this address; we will use it in the next step.
#### Launch OpenHands
You can now interact with the model served from LM Studio through OpenHands. Start the OpenHands server with Docker:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik
docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands-state:/.openhands-state \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.38
```
The server will start at `http://0.0.0.0:3000`. Open it in your browser and you will see a tab AI Provider Configuration. Click "see advanced settings" on the second line.
In the new tab, toggle Advanced on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address noted in the previous LM Studio step. Set the API Key to `dummy`. Click "Save changes".
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend using Devstral in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
2. To query the server from a client, you can use a simple Python snippet:
```py
import requests
import json
from huggingface_hub import hf_hub_download
url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Devstral-Small-2505"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "<your-command>",
},
],
},
]
data = {"model": model, "messages": messages, "temperature": 0.15}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```
### Mistral-inference
We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.
#### Install
Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```
#### Download
```python
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
#### Python
You can run the model using the following command:
```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```
You can then prompt it with anything you'd like.
### Ollama
You can run Devstral using the [Ollama](https://ollama.ai/) CLI.
```bash
ollama run devstral
```
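Ollama also exposes an OpenAI-compatible endpoint (by default at `http://localhost:11434/v1`), so you can query the served model programmatically. A minimal sketch, assuming the default port and the `devstral` model name from above:

```python
import requests

# Chat completion against Ollama's OpenAI-compatible API.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "devstral",
        "messages": [{"role": "user", "content": "Write a hello-world HTTP server in Python."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```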
### Transformers
To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
```python
import torch
from mistral_common.protocol.instruct.messages import (
SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_file(tekken_file)
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenized = tokenizer.encode_chat_completion(
ChatCompletionRequest(
messages=[
SystemMessage(content=SYSTEM_PROMPT),
UserMessage(content="<your-command>"),
],
)
)
output = model.generate(
input_ids=torch.tensor([tokenized.tokens]),
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
``` |
shah-sapna-video-link-4k/hot.video.redeem.craze.com.shah.sapna.viral.video.starcaptions.com.apk8d.redeem.craze.link | shah-sapna-video-link-4k | 2025-05-21T16:15:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-21T16:14:19Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
fizziehaq/q-FrozenLake-v1-4x4-noSlippery | fizziehaq | 2025-05-21T16:14:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-21T16:14:31Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="fizziehaq/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
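Once loaded, the Q-table can drive a greedy rollout. A short sketch, assuming the pickled dict follows the Deep RL course convention with a `qtable` entry and a Gymnasium-style environment:

```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```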
|
filipesantoscv11/26338804-0cf0-467b-961e-d0e1f40726b1 | filipesantoscv11 | 2025-05-21T16:14:24Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:quantized:unsloth/Phi-3.5-mini-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T15:23:03Z | ---
base_model: unsloth/Phi-3.5-mini-instruct
library_name: transformers
model_name: 26338804-0cf0-467b-961e-d0e1f40726b1
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 26338804-0cf0-467b-961e-d0e1f40726b1
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="filipesantoscv11/26338804-0cf0-467b-961e-d0e1f40726b1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/z1evedif)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
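For reference, the DPO objective from that paper optimizes the policy directly on preference pairs (y_w preferred, y_l rejected) against a frozen reference model; a sketch of the loss:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
```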
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
thliang01/nanoVLM_v0 | thliang01 | 2025-05-21T16:13:54Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-21T16:12:38Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("thliang01/nanoVLM_v0")
```
|