modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-25 00:44:43) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 476 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-25 00:44:09) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
MrRobotoAI/X2 | MrRobotoAI | 2025-05-24T12:21:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:MrRobotoAI/Huldra-R1-v2.2-8b-DEEPSEEK",
"base_model:merge:MrRobotoAI/Huldra-R1-v2.2-8b-DEEPSEEK",
"base_model:MrRobotoAI/X1",
"base_model:merge:MrRobotoAI/X1",
"base_model:Samhita-kolluri/mistral-paper-critique-lora",
"base_model:merge:Samhita-kolluri/mistral-paper-critique-lora",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T12:19:04Z | ---
base_model:
- MrRobotoAI/Huldra-R1-v2.2-8b-DEEPSEEK
- MrRobotoAI/X1
- Samhita-kolluri/mistral-paper-critique-lora
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
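Conceptually, a linear merge is a weighted average of corresponding parameter tensors. A minimal sketch of the idea (not mergekit's actual implementation; it assumes all checkpoints share one architecture):

```python
import torch

def linear_merge(state_dicts, weights, normalize=True):
    # weighted average of same-shape parameter tensors across checkpoints
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts)).to(torch.float16)
    return merged
```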
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/Huldra-R1-v2.2-8b-DEEPSEEK](https://huggingface.co/MrRobotoAI/Huldra-R1-v2.2-8b-DEEPSEEK)
* [MrRobotoAI/X1](https://huggingface.co/MrRobotoAI/X1) + [Samhita-kolluri/mistral-paper-critique-lora](https://huggingface.co/Samhita-kolluri/mistral-paper-critique-lora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/X1+Samhita-kolluri/mistral-paper-critique-lora
- model: MrRobotoAI/Huldra-R1-v2.2-8b-DEEPSEEK
parameters:
weight: 1.0
merge_method: linear
normalize: true
dtype: float16
```
|
DaniloNeto/prune_llama | DaniloNeto | 2025-05-24T12:20:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-05-24T12:18:16Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DaniloNeto
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
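A minimal inference sketch with plain `transformers` (the image path is a placeholder, and the repo's 4-bit bitsandbytes weights require `bitsandbytes` to be installed):

```python
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

repo = "DaniloNeto/prune_llama"
model = MllamaForConditionalGeneration.from_pretrained(repo, device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```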
|
SinaRp/Qwen2-0.5B-GRPO-test | SinaRp | 2025-05-24T12:19:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T10:30:51Z | ---
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model was fine-tuned on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset; the base model is not specified in this card.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SinaRp/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
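For orientation, a minimal TRL GRPO sketch on this dataset. The reward function is a toy stand-in (the run's actual rewards are not documented here), and the `problem` to `prompt` rename is an assumption about the dataset's column names:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.rename_column("problem", "prompt")  # GRPOTrainer expects a "prompt" column

def toy_reward(completions, **kwargs):
    # placeholder reward: prefer shorter completions
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # assumed base; the card does not name it
    reward_funcs=toy_reward,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```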
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BienKieu/codet5-phase2-v3 | BienKieu | 2025-05-24T12:17:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-24T12:17:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kanishka/opt-babylm2-clean-spacy-earlystop-bpe_seed-211_1e-3 | kanishka | 2025-05-24T12:16:50Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/babylm2-clean-spacy",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-04T14:11:38Z | ---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- kanishka/babylm2-clean-spacy
metrics:
- accuracy
model-index:
- name: opt-babylm2-clean-spacy-earlystop-bpe_seed-211_1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/babylm2-clean-spacy
type: kanishka/babylm2-clean-spacy
metrics:
- name: Accuracy
type: accuracy
value: 0.4792357308208087
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-babylm2-clean-spacy-earlystop-bpe_seed-211_1e-3
This model was trained from scratch on the kanishka/babylm2-clean-spacy dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6756
- Accuracy: 0.4792
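The checkpoint loads as a standard causal LM; a minimal usage sketch (the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kanishka/opt-babylm2-clean-spacy-earlystop-bpe_seed-211_1e-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
inputs = tokenizer("The child picked up the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```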
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 4.0925 | 1.0 | 2264 | 3.8114 | 0.3607 |
| 3.4547 | 2.0 | 4528 | 3.3029 | 0.4088 |
| 3.1308 | 3.0 | 6792 | 3.0892 | 0.4297 |
| 2.9202 | 4.0 | 9056 | 2.9797 | 0.4409 |
| 2.838 | 5.0 | 11320 | 2.9200 | 0.4472 |
| 2.7829 | 6.0 | 13584 | 2.8833 | 0.4511 |
| 2.7376 | 7.0 | 15848 | 2.8527 | 0.4547 |
| 2.707 | 8.0 | 18112 | 2.8312 | 0.4571 |
| 2.6841 | 9.0 | 20376 | 2.8202 | 0.4586 |
| 2.6621 | 10.0 | 22640 | 2.8073 | 0.4599 |
| 2.6417 | 11.0 | 24904 | 2.7996 | 0.4605 |
| 2.6427 | 12.0 | 27168 | 2.7924 | 0.4616 |
| 2.6312 | 13.0 | 29432 | 2.7864 | 0.4622 |
| 2.6218 | 14.0 | 31696 | 2.7867 | 0.4624 |
| 2.6024 | 15.0 | 33960 | 2.7614 | 0.4656 |
| 2.5607 | 16.0 | 36224 | 2.7356 | 0.4692 |
| 2.5117 | 17.0 | 38488 | 2.7129 | 0.4720 |
| 2.4557 | 18.0 | 40752 | 2.6935 | 0.4752 |
| 2.3912 | 19.0 | 43016 | 2.6783 | 0.4777 |
| 2.3218 | 19.9914 | 45260 | 2.6756 | 0.4792 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
|
xlight05/lora_model | xlight05 | 2025-05-24T12:09:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T12:09:39Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
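Assuming this repo stores LoRA adapter weights (the card does not say explicitly), a minimal load sketch with PEFT (`bitsandbytes` is needed for the 4-bit base):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "xlight05/lora_model")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```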
|
New-tutorial-Tamil-Aunty-Viral-Video/FULL.VIDEO.LINK.Tamil.Aunty.Viral.Video.Leaks.Official | New-tutorial-Tamil-Aunty-Viral-Video | 2025-05-24T12:08:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T12:06:05Z |
|
VIRAL-VIDEO-Katrina-Lim-Kiffy-LINK/CLIPS.Katrina.Lim.Kiffy.VIDEO.LINK | VIRAL-VIDEO-Katrina-Lim-Kiffy-LINK | 2025-05-24T12:07:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T12:06:41Z |
Video 18+ katrina lim kiffy katrinalim123 katrina lim tg telegram clip link
Full Video 18++ katrina lim kiffy katrinalim123 katrina lim tg telegram clip link
Full Video 18+ katrina lim kiffy katrinalim123 katrina lim tg telegram clip
Full Video 18+ katrina lim kiffy katrinalim123 katrina lim tg telegram |
bodam/sft-dpo-llama3.2-1b | bodam | 2025-05-24T12:05:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T12:05:13Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matyaydin/rag_example | matyaydin | 2025-05-24T12:04:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T08:55:39Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: rag_example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rag_example
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
|
duydc/qwen-2.5-7b-alpaca-instruct-2452025-ver7 | duydc | 2025-05-24T12:03:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T12:01:49Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen-2.5-7b-alpaca-instruct-2452025-ver7
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-2.5-7b-alpaca-instruct-2452025-ver7
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="duydc/qwen-2.5-7b-alpaca-instruct-2452025-ver7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/duydc/huggingface/runs/hkb5p7cl)
This model was trained with SFT.
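A minimal TRL SFT sketch; the conversational dataset below is a stand-in, since the run's actual Alpaca-style data and hyperparameters are not documented in this card:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # stand-in conversational dataset
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    args=SFTConfig(output_dir="qwen-2.5-7b-alpaca-instruct-sft"),
    train_dataset=dataset,
)
trainer.train()
```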
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hitty28/MedQwen-1.5B-Instruct-v2 | hitty28 | 2025-05-24T12:00:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"region:us"
] | null | 2025-05-24T12:00:42Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
haihp02/5f0b3664-21f9-4b51-a76d-d8371a3f095d-phase2-adapter | haihp02 | 2025-05-24T11:55:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T11:54:29Z | ---
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
model_name: 5f0b3664-21f9-4b51-a76d-d8371a3f095d-phase2-adapter
tags:
- generated_from_trainer
- trl
- sft
- dpo
licence: license
---
# Model Card for 5f0b3664-21f9-4b51-a76d-d8371a3f095d-phase2-adapter
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haihp02/5f0b3664-21f9-4b51-a76d-d8371a3f095d-phase2-adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/sn56-dpo-train/runs/5w9vqcd1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
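A minimal TRL DPO sketch; the preference dataset below is a stand-in, since the run's actual data is not documented in this card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "unsloth/Qwen2.5-Coder-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # stand-in preference data
trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen-coder-dpo", beta=0.1),
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```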
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
VIRAL-VIDEO-Katrina-Lim-Kiffy-LINK/VIRAL.Katrina.Lim.Kiffy.CLIP.VIDEO.LINK | VIRAL-VIDEO-Katrina-Lim-Kiffy-LINK | 2025-05-24T11:55:06Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T11:53:24Z | |
FormlessAI/a6417ccf-4950-49c4-9186-660ff0e7a007 | FormlessAI | 2025-05-24T11:53:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T11:24:26Z | ---
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
model_name: a6417ccf-4950-49c4-9186-660ff0e7a007
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for a6417ccf-4950-49c4-9186-660ff0e7a007
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/a6417ccf-4950-49c4-9186-660ff0e7a007", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/oq5fqho0)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AlekseyCalvin/SkyreelsV2_1.3B_Diffusers | AlekseyCalvin | 2025-05-24T11:47:42Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"diffusers:WanPipeline",
"region:us"
] | null | 2025-05-24T11:26:06Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WenFengg/securityO1_w6_k4 | WenFengg | 2025-05-24T11:45:53Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T11:30:40Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Chien0405/distillation_train_cn_clip | Chien0405 | 2025-05-24T11:44:04Z | 0 | 0 | null | [
"arxiv:2211.01335",
"region:us"
] | null | 2025-05-24T11:32:34Z | # Chinese-CLIP 蒸馏模型训练项目
基于Chinese-CLIP的知识蒸馏研究项目,包含完整的训练、评估和零样本分类实验。
## 📁 仓库结构
```
├── models/ # 蒸馏模型
│ ├── team/ # TEAM蒸馏模型
│ ├── large/ # Large蒸馏模型
│ ├── huge/ # Huge蒸馏模型
│ └── baseline/ # A800基准模型
├── training_logs/ # 训练日志
├── evaluation_results/ # 评估结果
└── README.md # 项目文档
```
## 🎯 项目概述
本项目基于Chinese-CLIP进行知识蒸馏研究,包含:
1. **模型蒸馏**: 使用TEAM、Large、Huge等不同方法进行知识蒸馏
2. **模型评估**: 在Flickr30k-CN等数据集上进行图文检索评估
3. **零样本分类**: 在ELEVATER数据集上进行零样本图像分类测试
## 🚀 快速开始
### 环境准备
```bash
# 安装依赖
pip install cn_clip torch torchvision
# 下载模型
git lfs pull
```
### 使用模型
```python
import torch
from cn_clip.clip import load_from_name
# 加载蒸馏模型
model, preprocess = load_from_name("ViT-B-16", device="cuda")
checkpoint = torch.load("models/team/epoch_latest.pt", map_location="cuda")
model.load_state_dict(checkpoint['state_dict'])
```
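Continuing from the snippet above, similarity scoring follows the standard cn_clip API; the image path and candidate labels below are placeholders:

```python
from PIL import Image
import cn_clip.clip as clip

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to("cuda")
text = clip.tokenize(["一只猫", "一只狗"]).to("cuda")  # placeholder labels: "a cat", "a dog"
with torch.no_grad():
    logits_per_image, _ = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1)
print(probs)
```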
## 📊 Model Performance
Detailed evaluation results are in the `evaluation_results/` directory.
## 🔗 Related Links
- [Chinese-CLIP original project](https://github.com/OFA-Sys/Chinese-CLIP)
- [Training dataset: MUGE](https://tianchi.aliyun.com/dataset/dataDetail?dataId=107090)
- [Evaluation dataset: Flickr30k-CN](https://github.com/li-xirong/cross-lingual-cap)
## 📄 Citation
If you use the models from this project, please cite:
```bibtex
@article{chinese-clip,
title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese},
author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang},
journal={arXiv preprint arXiv:2211.01335},
year={2022}
}
```
## 📮 Contact
For questions, please open an issue or contact the project maintainers.
|
Harry2166/fine-tuned-climate-bert | Harry2166 | 2025-05-24T11:41:43Z | 52 | 0 | null | [
"safetensors",
"distilbert",
"en",
"dataset:climatebert/climate_sentiment",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2025-05-15T13:04:25Z | ---
datasets:
- climatebert/climate_sentiment
language:
- en
base_model:
- distilbert/distilbert-base-uncased
--- |
sanchit42/instruct-4-aug | sanchit42 | 2025-05-24T11:36:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T11:32:38Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Berkayy4/gemma3_small_ds | Berkayy4 | 2025-05-24T11:36:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T11:20:46Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma3_small_ds
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma3_small_ds
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Berkayy4/gemma3_small_ds", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DAKARA555/grinding_pussyjob_e130 | DAKARA555 | 2025-05-24T11:34:38Z | 53 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-05-06T18:00:20Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/IMG_9514.PNG
base_model: Wan-AI/Wan2.1-I2V-14B-480P
instance_prompt: null
license: apache-2.0
---
# test4
<Gallery />
## Model description
https://civitai.com/models/1476907/grinding
https://huggingface.co/DAKARA555/grinding_pussyjob_e130/resolve/main/grinding_pussyjob_e130.safetensors?download=true
## Download model
Weights for this model are available in Safetensors format.
[Download](/DAKARA555/test4/tree/main) them in the Files & versions tab.
|
nell123/phi3-avg | nell123 | 2025-05-24T11:34:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2212.04089",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:merge:microsoft/Phi-3-mini-128k-instruct",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:merge:microsoft/Phi-3-mini-4k-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T11:33:14Z | ---
base_model:
- microsoft/Phi-3-mini-128k-instruct
- microsoft/Phi-3-mini-4k-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# output-model-directory
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as a base.
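As a rough illustration (not mergekit's actual code), task arithmetic adds weighted task vectors, i.e. the parameter deltas between each fine-tuned model and the base, onto the base weights:

```python
def task_arithmetic_merge(base, finetuned_models, weights):
    # merged = base + sum_i w_i * (finetuned_i - base), per parameter tensor
    merged = {}
    for key in base:
        delta = sum(w * (ft[key] - base[key]) for w, ft in zip(weights, finetuned_models))
        merged[key] = base[key] + delta
    return merged
```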
### Models Merged
The following models were included in the merge:
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: microsoft/Phi-3-mini-4k-instruct
parameters:
weight: 0.8
- model: microsoft/Phi-3-mini-128k-instruct
parameters:
weight: 0.2
base_model: microsoft/Phi-3-mini-4k-instruct
merge_method: task_arithmetic
dtype: float16
```
|
lululuaaaaa/aicrowd-lyy-base-llm-v4-grpo-v2 | lululuaaaaa | 2025-05-24T11:33:49Z | 0 | 0 | null | [
"safetensors",
"mllama",
"license:apache-2.0",
"region:us"
] | null | 2025-05-24T11:14:13Z | ---
license: apache-2.0
---
|
newshinsei/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose | newshinsei | 2025-05-24T11:32:47Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pesty howling moose",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-16T12:24:57Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pesty howling moose
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="newshinsei/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jonathantiedchen/Mistral-7B-CPT-CL | jonathantiedchen | 2025-05-24T11:31:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T11:22:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WalidBouss/Qwen2.5-vl-3b-Instruct-ModdedTokenizer | WalidBouss | 2025-05-24T11:28:45Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-23T15:39:40Z |
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---
# Qwen2.5-VL-3B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
In the past five months since Qwen2-VL’s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.
#### Key Enhancements:
* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.
* **Being agentic**: Qwen2.5-VL can act directly as a visual agent that reasons and dynamically directs tools, enabling computer use and phone use.
* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has the new ability of capturing events by pinpointing the relevant video segments.
* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.
* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.
#### Model Architecture Updates:
* **Dynamic Resolution and Frame Rate Training for Video Understanding**:
We extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL/qwen2.5vl_arc.jpeg" width="80%"/>
</p>
* **Streamlined and Efficient Vision Encoder**
We enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.
We have three models with 3, 7 and 72 billion parameters. This repo contains the instruction-tuned 3B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).
## Evaluation
### Image benchmark
| Benchmark | InternVL2.5-4B |Qwen2-VL-7B |Qwen2.5-VL-3B |
| :--- | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 52.3 | 54.1 | 53.1|
| MMMU-Pro<sub>val</sub> | **32.7** | 30.5 | 31.6|
| AI2D<sub>test</sub> | 81.4 | **83.0** | 81.5 |
| DocVQA<sub>test</sub> | 91.6 | 94.5 | **93.9** |
| InfoVQA<sub>test</sub> | 72.1 | 76.5 | **77.1** |
| TextVQA<sub>val</sub> | 76.8 | **84.3** | 79.3|
| MMBench-V1.1<sub>test</sub> | 79.3 | **80.7** | 77.6 |
| MMStar | 58.3 | **60.7** | 55.9 |
| MathVista<sub>testmini</sub> | 60.5 | 58.2 | **62.3** |
| MathVision<sub>full</sub> | 20.9 | 16.3 | **21.2** |
### Video benchmark
| Benchmark | InternVL2.5-4B | Qwen2-VL-7B | Qwen2.5-VL-3B |
| :--- | :---: | :---: | :---: |
| MVBench | 71.6 | 67.0 | 67.0 |
| VideoMME | 63.6/62.3 | 69.0/63.3 | 67.6/61.5 |
| MLVU | 48.3 | - | 68.2 |
| LVBench | - | - | 43.3 |
| MMBench-Video | 1.73 | 1.44 | 1.63 |
| EgoSchema | - | - | 64.8 |
| PerceptionTest | - | - | 66.9 |
| TempCompass | - | - | 64.4 |
| LongVideoBench | 55.2 | 55.6 | 54.2 |
| CharadesSTA/mIoU | - | - | 38.8 |
### Agent benchmark
| Benchmarks | Qwen2.5-VL-3B |
|-------------------------|---------------|
| ScreenSpot | 55.5 |
| ScreenSpot Pro | 23.9 |
| AITZ_EM | 76.9 |
| Android Control High_EM | 63.7 |
| Android Control Low_EM | 22.2 |
| AndroidWorld_SR | 90.8 |
| MobileMiniWob++_SR | 67.9 |
## Requirements
The code of Qwen2.5-VL has been merged into the latest Hugging Face transformers, and we advise you to build from source with the following command:
```
pip install git+https://github.com/huggingface/transformers accelerate
```
or you might encounter the following error:
```
KeyError: 'qwen2_5_vl'
```
## Quickstart
Below, we provide simple examples to show how to use Qwen2.5-VL with 🤖 ModelScope and 🤗 Transformers.
We offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
# It's highly recommended to use the `[decord]` feature for faster video loading.
pip install qwen-vl-utils[decord]==0.0.8
```
If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils`, which will fall back to using torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to have decord used when loading videos.
### Using 🤗 Transformers to Chat
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2.5-VL-3B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a list of images as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a local video path and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video url and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4",
},
{"type": "text", "text": "Describe this video."},
],
}
]
# In Qwen2.5-VL, frame rate information is also fed into the model to align with absolute time.
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,  # includes the fps metadata returned by process_vision_info
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend with `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.
| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### 🤖 ModelScope
We strongly advise users, especially those in mainland China, to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2.5-VL-3B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```
{
...,
"type": "yarn",
"mrope_section": [
16,
24,
24
],
"factor": 4,
"original_max_position_embeddings": 32768
}
```
However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.
At the same time, for long video inputs, since mRoPE itself is more economical with position ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.
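As a sketch, that change is a one-line edit to a local copy of the checkpoint's `config.json` (the 64k value below is the example from the text):
```python
import json

# Bump the context limit in a local copy of the checkpoint's config.json.
with open("config.json") as f:
    cfg = json.load(f)
cfg["max_position_embeddings"] = 65536  # "64k"
with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```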
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5-VL,
title = {Qwen2.5-VL},
url = {https://qwenlm.github.io/blog/qwen2.5-vl/},
author = {Qwen Team},
month = {January},
year = {2025}
}
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
mika5883/gec_t5_dpo_A_v1 | mika5883 | 2025-05-24T11:27:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:mika5883/ft_rugec_A",
"base_model:finetune:mika5883/ft_rugec_A",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-24T11:26:55Z | ---
base_model: mika5883/ft_rugec_A
library_name: transformers
model_name: gec_t5_dpo_A_v1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for gec_t5_dpo_A_v1
This model is a fine-tuned version of [mika5883/ft_rugec_A](https://huggingface.co/mika5883/ft_rugec_A).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
# This is a seq2seq (T5) grammatical-error-correction model, so the
# text2text-generation pipeline with a plain input string is the right fit.
# (The Russian example sentence and plain-string format are illustrative;
# the exact prompt format expected by the model is not documented here.)
corrector = pipeline("text2text-generation", model="mika5883/gec_t5_dpo_A_v1", device="cuda")
output = corrector("Он пошел в магазин и купил хлеб.", max_new_tokens=128)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mika5883/huggingface/runs/rqvsjhvc)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
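For reference, DPO optimizes a logistic loss on the policy-versus-reference log-ratio margin between the chosen and rejected completion of each preference pair. A minimal sketch of the per-batch loss, assuming summed sequence log-probabilities are precomputed (not the TRL internals):
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_lp, policy_rejected_lp, ref_chosen_lp, ref_rejected_lp, beta=0.1):
    # Each input: tensor of summed log-probs of the full sequence under that model.
    margin = (policy_chosen_lp - ref_chosen_lp) - (policy_rejected_lp - ref_rejected_lp)
    return -F.logsigmoid(beta * margin).mean()
```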
### Framework versions
- TRL: 0.14.0
- Transformers: 4.52.3
- Pytorch: 2.5.1
- Datasets: 3.0.1
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
omrisap/TreeRPO_math_straight_2_bf16 | omrisap | 2025-05-24T11:27:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T11:23:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_LoRa_ACSEmployment_2_cfda_ep9_22 | MinaMila | 2025-05-24T11:26:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T11:26:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
othoi-113-video-link/othoiiii.viral.video.link.othoi.viral.video.link.1.13.second | othoi-113-video-link | 2025-05-24T11:25:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T11:23:09Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=othoi-113)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=othoi-113)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=othoi-113) |
khashayardr/yelp_review_classifier | khashayardr | 2025-05-24T11:20:36Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-24T10:25:46Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: yelp_review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp_review_classifier
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
savesta222111/saveata | savesta222111 | 2025-05-24T11:20:03Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-generation",
"ar",
"en",
"dataset:nvidia/OpenMathReasoning",
"base_model:deepseek-ai/DeepSeek-Prover-V2-671B",
"base_model:adapter:deepseek-ai/DeepSeek-Prover-V2-671B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-24T11:18:45Z | ---
license: apache-2.0
datasets:
- nvidia/OpenMathReasoning
language:
- ar
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-Prover-V2-671B
new_version: nari-labs/Dia-1.6B
pipeline_tag: text-generation
library_name: adapter-transformers
--- |
lululuaaaaa/aicrowd-base-llm-grpo-byw | lululuaaaaa | 2025-05-24T11:19:24Z | 0 | 0 | null | [
"safetensors",
"mllama",
"license:apache-2.0",
"region:us"
] | null | 2025-05-24T11:03:58Z | ---
license: apache-2.0
---
|
beanne-valerie/beanne.scandal.beanne.valerie.dela.cruz.beanne.valerie.dela.cruz.telegram | beanne-valerie | 2025-05-24T11:18:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T11:15:59Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=beanne-valerie)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=beanne-valerie)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=beanne-valerie) |
venkatasagar/NER-roBERTa-finetuned | venkatasagar | 2025-05-24T11:17:27Z | 0 | 0 | transformers | [
"transformers",
"NER",
"Named-Enity-Recognition",
"roberta",
"token-classification",
"document-understanding",
"resume-parsing",
"cv-parsing",
"custom-ner",
"sliding-window-inference",
"huggingface",
"pytorch",
"spaCy",
"pdf-extraction",
"docx-extraction",
"information-extraction",
"text-mining",
"text-classification",
"job-application-automation",
"structured-data-extraction",
"en",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-24T10:17:59Z | ---
license: mit
language:
- en
metrics:
- precision
- recall
- f1
- accuracy
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: text-classification
tags:
- NER
- Named-Entity-Recognition
- roberta
- transformers
- token-classification
- document-understanding
- resume-parsing
- cv-parsing
- custom-ner
- sliding-window-inference
- huggingface
- pytorch
- spaCy
- pdf-extraction
- docx-extraction
- information-extraction
- text-mining
- text-classification
- job-application-automation
- structured-data-extraction
---
# 🚀 NER-RoBERTa: Fine-Tuned Named Entity Recognition Model
A robust Named Entity Recognition (NER) model fine-tuned on custom annotated resume/career-related data using the **RoBERTa** architecture. This model can extract structured information such as personal details, education, work experience, skills, and more from unstructured text, making it well suited for resume parsing, HR automation, and document-understanding tasks.
---
## 🧠 Model Details
* **Model architecture:** XLM-RoBERTa base (`FacebookAI/xlm-roberta-base`)
* **Task:** Token Classification (NER)
* **Fine-tuned on:** Annotated resume dataset (custom labels)
* **Entity types:**
* `NAME`
* `CONTACT`, `EMAIL`, `LOCATION`
* `LINKEDIN`, `GITHUB`
* `ORG_NAME`, `JOB_TITLE`, `START_DATE`, `END_DATE`
* `DEGREE`, `FIELD_OF_STUDY`, `GRADUATION_YEAR`, `GPA`
* `SKILLS`, `PROJECT_TITLE`, `LANGUAGES`, `OTHER`
---
## 📦 Files Included
* `config.json`
* `pytorch_model.bin` or `model.safetensors`
* `tokenizer_config.json`, `vocab.json`, `tokenizer.json`
* `special_tokens_map.json`
* `merges.txt`
---
## 📊 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
# Load model and tokenizer (Auto classes resolve the correct architecture
# from the checkpoint config)
model = AutoModelForTokenClassification.from_pretrained("venkatasagar/NER-roBERTa-finetuned")
tokenizer = AutoTokenizer.from_pretrained("venkatasagar/NER-roBERTa-finetuned")
# Sample text
text = "John Doe is a software engineer at Google. He graduated with a B.Tech in Computer Science from MIT in 2022."
# Tokenize and predict
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)
# Decode results (convert tensor ids to plain ints for the id2label lookup)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
predicted_labels = [model.config.id2label[label_id] for label_id in predictions[0].tolist()]
for token, label in zip(tokens, predicted_labels):
print(f"{token}: {label}")
```
---
## 📈 Intended Use Cases
* Resume parsing
* HR and recruitment platforms
* Talent analytics
* Job-matching engines
* NLP-based document processors
---
## 🏷️ Tags
`NER`, `transformers`, `huggingface`, `token-classification`, `roberta`, `resume-parser`, `nlp`, `named-entity-recognition`, `custom-dataset`, `career-data`, `information-extraction`
---
## 📁 Datasets & Training
This model was trained on a custom-labeled resume dataset containing various sections such as education, experience, projects, and skills. The dataset included `.txt`, `.pdf`, and `.docx` files, processed with the spaCy, PyMuPDF, and python-docx libraries.
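As an illustration of that extraction step (a minimal sketch; the maintainer's actual preprocessing pipeline may differ):
```python
import fitz  # PyMuPDF
from docx import Document  # python-docx

def extract_text(path: str) -> str:
    """Pull raw text from a resume file in any of the three supported formats."""
    if path.endswith(".pdf"):
        with fitz.open(path) as doc:
            return "".join(page.get_text() for page in doc)
    if path.endswith(".docx"):
        return "\n".join(p.text for p in Document(path).paragraphs)
    with open(path, encoding="utf-8") as f:  # plain .txt fallback
        return f.read()
```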
*If you'd like to access the dataset or contribute, please contact the maintainer.*
---
## 📤 Model Hosted On
**Model Hub:** [https://huggingface.co/venkatasagar/NER-roBERTa-finetuned](https://huggingface.co/venkatasagar/NER-roBERTa-finetuned)
---
## 🤝 Contributing
Feel free to fork the repository and open issues or PRs to enhance the model or pipeline!
---
## 🧑💻 Maintainer
**Name:** Venkata Sagar
**Contact:** [[email protected]](mailto:[email protected])
CHAAAKHDABug/model | CHAAAKHDABug | 2025-05-24T11:11:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:atlasia/XLM-RoBERTa-Morocco",
"base_model:finetune:atlasia/XLM-RoBERTa-Morocco",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-24T11:05:29Z | ---
library_name: transformers
license: mit
base_model: atlasia/XLM-RoBERTa-Morocco
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [atlasia/XLM-RoBERTa-Morocco](https://huggingface.co/atlasia/XLM-RoBERTa-Morocco) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
fahmiaziz/SmolLM2-135M-Instruct-Clinical-Note | fahmiaziz | 2025-05-24T11:11:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"clinical-note",
"summarization",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2025-05-24T11:10:12Z | ---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
model_name: SmolLM2-135M-Instruct-Clinical-Note
tags:
- generated_from_trainer
- clinical-note
- summarization
- trl
- sft
licence: license
---
# Model Card for SmolLM2-135M-Instruct-Clinical-Note
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fahmiaziz/SmolLM2-135M-Instruct-Clinical-Note", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
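In outline, that means supervised fine-tuning on prompt–completion pairs with the standard language-modeling loss. A minimal TRL sketch (the data file below is an illustrative placeholder, since the clinical-note dataset is not linked here):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder data file; the actual clinical-note dataset is not public here.
dataset = load_dataset("json", data_files="clinical_notes.jsonl", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",
    args=SFTConfig(output_dir="SmolLM2-135M-Instruct-Clinical-Note"),
    train_dataset=dataset,
)
trainer.train()
```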
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Gswrtz/MNLP_M2_rag_model | Gswrtz | 2025-05-24T11:08:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T11:05:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yashz71/bert-finetuned-ner | yashz71 | 2025-05-24T11:08:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-24T09:56:57Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9317355371900826
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.940126751167445
- name: Accuracy
type: accuracy
value: 0.9861953258374051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9317
- Recall: 0.9487
- F1: 0.9401
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0776 | 1.0 | 1756 | 0.0666 | 0.8926 | 0.9305 | 0.9112 | 0.9813 |
| 0.0347 | 2.0 | 3512 | 0.0670 | 0.9306 | 0.9458 | 0.9382 | 0.9850 |
| 0.0213 | 3.0 | 5268 | 0.0612 | 0.9317 | 0.9487 | 0.9401 | 0.9862 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Jehex/Jibv8 | Jehex | 2025-05-24T11:05:33Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-02-21T09:54:17Z | ---
license: apache-2.0
---
|
LandCruiser/sn29_cold_2305_4 | LandCruiser | 2025-05-24T11:05:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T07:27:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
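The section above is left empty; a minimal, untested `transformers` sketch for this Llama-architecture text-generation checkpoint (repo id taken from this card) might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LandCruiser/sn29_cold_2305_4"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```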
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gride29/flux-custom-smaller | gride29 | 2025-05-24T11:03:38Z | 279 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-14T02:14:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Flux Custom Smaller
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/gride29/flux-custom-smaller/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('gride29/flux-custom-smaller', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/gride29/flux-custom-smaller/discussions) to add images that show off what you’ve made with this LoRA.
|
WenFengg/ronaldo_o1_w6_k1 | WenFengg | 2025-05-24T11:02:23Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T10:55:12Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Berkayy4/gemma_small_ds | Berkayy4 | 2025-05-24T11:02:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T10:54:36Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma_small_ds
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma_small_ds
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Berkayy4/gemma_small_ds", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cytoe/dickbot-0.6B-r32-ft | cytoe | 2025-05-24T11:01:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T11:00:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
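As a placeholder for the empty section above, here is a hedged `transformers` sketch for this Qwen3 text-generation checkpoint; the prompt is illustrative and the chat template is assumed to ship with the tokenizer:

```python
from transformers import pipeline

# Qwen3-architecture checkpoint per the card tags
generator = pipeline("text-generation", model="cytoe/dickbot-0.6B-r32-ft", device_map="auto")
messages = [{"role": "user", "content": "Say hi in one sentence."}]
print(generator(messages, max_new_tokens=32)[0]["generated_text"])
```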
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Odohofre/q-FrozenLake-v1-4x4-noSlippery | Odohofre | 2025-05-24T10:58:17Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-23T23:42:24Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # FrozenLake ships with gymnasium

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="Odohofre/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ertghiu256/qwen3-4b-code-reasoning-gguf | ertghiu256 | 2025-05-24T10:56:57Z | 524 | 1 | null | [
"gguf",
"qwen3",
"code",
"unsloth",
"coding",
"cot",
"reasoning",
"text-generation",
"dataset:nvidia/OpenCodeReasoning",
"dataset:vicgalle/creative-rubrics-gpt-4.5-o3-R1",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T14:38:51Z | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
- vicgalle/creative-rubrics-gpt-4.5-o3-R1
base_model:
- unsloth/Qwen3-4B-GGUF
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- code
- unsloth
- coding
- cot
- reasoning
---
# Qwen 3 Code Reasoning

<small>Generated by reveai.org</small>
A small Qwen 3 model with 4 billion parameters, fine-tuned on [nvidia/OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) for coding tasks.
For coding tasks, thinking mode is recommended.
## Recommended parameters:
- Context window : > 10000
- Temperature : 0.7
- Top_p : 0.99
- Top_k : 60
- Min_p : 0.05
## Strengths
- Code generation
- Logical question answering
## Drawbacks
- Overthinking
- Context Overflow
## Recommended Usage:
- [Ollama](https://ollama.com/)
- [Model at ollama](https://ollama.com/ertghiu256/qwen3-4b-code-reasoning)
- [LM Studio](https://lmstudio.ai/)
- [Llama.cpp](https://github.com/ggml-org/llama.cpp)
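Beyond the runtimes listed above, the GGUF file can also be driven from Python. Below is a minimal, untested sketch with `llama-cpp-python`, wired to the recommended sampling parameters; the local filename is illustrative, and `min_p` support assumes a reasonably recent version of the bindings:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-4b-code-reasoning.Q4_K_M.gguf",  # illustrative local filename
    n_ctx=16384,  # comfortably above the >10000 context recommendation
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string. /think"}],
    temperature=0.7, top_p=0.99, top_k=60, min_p=0.05,
)
print(out["choices"][0]["message"]["content"])
```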
### SAFETENSOR VERSION
[safetensor](https://huggingface.co/ertghiu256/qwen3-4b-code-reasoning)
## Example Usage :
```
> Write me a snake game in python. Ensure all keybinding (WASD) work properly and all mechanics from the original snake game works. /think
<think>
Okay, I need to write a Snake game in Python where all the keybindings (WASD) work properly and all the original mechanics are included. Let me think about how to approach this.
First, I remember that the classic Snake game works by having a snake move in a direction, grow when it eats food, and die if it hits the wall or itself. The keybindings should allow the snake to change direction using W, A, S, D. But I need to make sure that the movement is smooth and that the snake can't reverse direction immediately, which might cause it to collide with itself right away.
So, the plan is to use the pygame library because it's commonly used for game development in Python. I'll need to initialize the game window, set up the display, handle the game loop, manage the snake's movement, check for collisions, and handle the food.
Let me outline the steps:
Import necessary modules: Pygame, sys, random for generating food, time for delay.
Initialize Pygame: Set up the display, clock, fonts, etc.
Define constants: Screen size, grid size (like 20x20), snake speed, colors.
Create the Snake class: Each segment of the snake. The snake moves in a direction, and each segment follows the previous one.
Create the Food class: Randomly placed on the screen, and when eaten, the snake grows.
Game loop:
Handle events (WASD for direction).
Move the snake.
Check for collisions (walls or self).
Check if food is eaten, then increase length.
Update the display.
Control the game speed with a clock.
Handle keybindings: Make sure that the snake can't reverse direction. So, if the current direction is right, the next can't be left, and so on.
Wait, but in the original Snake game, ...(Too long)
</think>
Here's a complete implementation of the **Snake Game** using the `pygame` library. The game allows the player to control the snake using the **WASD keys**, and the snake moves in the direction of the last key pressed. The game ends when the snake collides with the wall or itself, and the snake grows by one segment each time it eats the food.
---
### ✅ Features:
- **WASD Controls**:
- `W` → Up
- `A` → Left
- `S` → Down
- `D` → Right
- **Food Generation**:
- Randomly placed on a 32×24 grid (640×480 pixels).
- **Game Over**:
- Ends when the snake hits the wall or itself.
- **Speed Control**:
- Set with `clock.tick(10)` to control the game speed.
---
### 🧠 How It Works:
1. The snake starts with 3 segments.
2. The player uses `W`, `A`, `S`, `D` to change the direction.
3. The snake moves in the current direction, growing if it eats the food.
4. When the snake collides with the wall or itself, the game ends.
---
### 📦 Code Implementation
```python
import pygame
import sys
import random
# Initialize Pygame
pygame.init()
# Game settings
SCREEN_WIDTH = 640
SCREEN_HEIGHT = 480
BLOCK_SIZE = 20
GRID_WIDTH = SCREEN_WIDTH // BLOCK_SIZE
GRID_HEIGHT = SCREEN_HEIGHT // BLOCK_SIZE
# Colors
BLACK = (0, 0, 0)
GREEN = (0, 255, 0)
RED = (255, 0, 0)
# Initialize screen
screen = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))
pygame.display.set_caption("Snake Game")
# Clock
clock = pygame.time.Clock()
# Snake
snake_segments = [(3, 3), (2, 3), (1, 3)]
direction = (1, 0) # Start moving right
# Food
def generate_food():
while True:
x = random.randint(0, GRID_WIDTH - 1)
y = random.randint(0, GRID_HEIGHT - 1)
if (x, y) not in snake_segments:
return (x, y)
food = generate_food()
# Game loop
running = True
ate_food = False
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_w:
direction = (0, -1)
elif event.key == pygame.K_a:
direction = (-1, 0)
elif event.key == pygame.K_s:
direction = (0, 1)
elif event.key == pygame.K_d:
direction = (1, 0)
# Move the snake
new_head = (snake_segments[0][0] + direction[0], snake_segments[0][1] + direction[1])
snake_segments.insert(0, new_head)
if not ate_food:
snake_segments.pop()
ate_food = False
# Check for collisions
if (new_head[0] < 0 or new_head[0] >= GRID_WIDTH or
new_head[1] < 0 or new_head[1] >= GRID_HEIGHT):
pygame.quit()
sys.exit()
if new_head in snake_segments[1:]:
pygame.quit()
sys.exit()
# Check if food is eaten
if new_head == food:
food = generate_food()
ate_food = True
# Draw everything
screen.fill(BLACK)
for segment in snake_segments:
pygame.draw.rect(screen, GREEN, (segment[0] * BLOCK_SIZE, segment[1] * BLOCK_SIZE, BLOCK_SIZE, BLOCK_SIZE))
pygame.draw.rect(screen, RED, (food[0] * BLOCK_SIZE, food[1] * BLOCK_SIZE, BLOCK_SIZE, BLOCK_SIZE))
pygame.display.flip()
# Control the game speed
clock.tick(10)
pygame.quit()
sys.exit()
```
---
### 🧪 How to Run:
1. Install `pygame` using pip:
`pip install pygame`
2. Save the code in a file, e.g., `snake_game.py`.
3. Run the file using Python:
`python snake_game.py`
---
### 🚀 Tips:
- The game is fast-paced. You can slow it down by increasing the value in `clock.tick(10)` (e.g., `clock.tick(15)`).
- The snake starts in the middle of the screen, and the food is placed randomly.
Enjoy playing the **Snake Game**! 🐍🎮
``` |
calcuis/tensor-transfer-protocol | calcuis | 2025-05-24T10:56:20Z | 0 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2025-05-24T09:39:19Z | ---
license: mit
---
## tensor transfer protocol
- test pack
- pig architecture from [connector](https://huggingface.co/connector)
|
jrluo/bert-base-train10000 | jrluo | 2025-05-24T10:55:57Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-24T10:55:57Z | ---
license: apache-2.0
---
|
Saint5/hg_tutorial_food_not_food_text_classifier_distilbert_base_uncased | Saint5 | 2025-05-24T10:54:30Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T00:01:43Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hg_tutorial_food_not_food_text_classifier_distilbert_base_uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hg_tutorial_food_not_food_text_classifier_distilbert_base_uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
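No usage snippet is provided; a minimal `transformers` sketch for querying this binary food/not-food text classifier could look like this:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Saint5/hg_tutorial_food_not_food_text_classifier_distilbert_base_uncased",
)
print(classifier("A steaming bowl of ramen with a soft-boiled egg."))
print(classifier("The meeting was rescheduled to Thursday afternoon."))
```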
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4477 | 1.0 | 7 | 0.0822 | 1.0 |
| 0.0357 | 2.0 | 14 | 0.0071 | 1.0 |
| 0.0051 | 3.0 | 21 | 0.0023 | 1.0 |
| 0.0019 | 4.0 | 28 | 0.0012 | 1.0 |
| 0.0012 | 5.0 | 35 | 0.0009 | 1.0 |
| 0.001 | 6.0 | 42 | 0.0007 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0006 | 1.0 |
| 0.0007 | 8.0 | 56 | 0.0006 | 1.0 |
| 0.0007 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0007 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sarvamai/sarvam-m-q8-gguf | sarvamai | 2025-05-24T10:50:37Z | 0 | 0 | null | [
"gguf",
"base_model:sarvamai/sarvam-m",
"base_model:quantized:sarvamai/sarvam-m",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T09:18:57Z | ---
license: apache-2.0
base_model:
- sarvamai/sarvam-m
---
# Sarvam-M
<p align="center">
<a href="https://dashboard.sarvam.ai/playground"
target="_blank" rel="noopener noreferrer">
<img
src="https://img.shields.io/badge/🚀 Chat on Sarvam Playground-1488CC?style=for-the-badge&logo=rocket"
alt="Chat on Sarvam Playground"
/>
</a>
</p>
# Model Information
> [!Note]
> This repository contains gguf version of [`sarvam-m`](https://huggingface.co/sarvamai/sarvam-m) in q8 precision.
Learn more about sarvam-m in our detailed [blog post](https://www.sarvam.ai/blogs/sarvam-m).
# Running the model on a CPU
You can use the model on your local machine (without a GPU) as explained [here](https://github.com/ggml-org/llama.cpp/tree/master/tools/main).
Example Command:
```
./build/bin/llama-cli -i -m /your/folder/path/sarvam-m-q8_0.gguf -c 8192 -t 16
``` |
mohhtl/165a9c7a-e55f-4fe4-a943-03436a3e3efa | mohhtl | 2025-05-24T10:43:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"generated_from_trainer",
"dataset:ee4eb606-53c8-44b0-bd60-8cc0f514aae9_test.json",
"dataset:ee4eb606-53c8-44b0-bd60-8cc0f514aae9_synth.json",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | 2025-05-24T09:50:48Z | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- generated_from_trainer
datasets:
- ee4eb606-53c8-44b0-bd60-8cc0f514aae9_test.json
- ee4eb606-53c8-44b0-bd60-8cc0f514aae9_synth.json
model-index:
- name: results/165a9c7a-e55f-4fe4-a943-03436a3e3efa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
dataset_prepared_path: results/ee4eb606-53c8-44b0-bd60-8cc0f514aae9_last_run_prepared
datasets:
- path: ee4eb606-53c8-44b0-bd60-8cc0f514aae9_test.json
type: &id001
field: null
field_input: null
field_instruction: instruct
field_output: output
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
- path: ee4eb606-53c8-44b0-bd60-8cc0f514aae9_synth.json
type: *id001
flash_attention: false
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
micro_batch_size: 32
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_bnb_8bit
output_dir: results/165a9c7a-e55f-4fe4-a943-03436a3e3efa
pad_to_sequence_len: true
sample_packing: false
save_total_limit: 1
saves_per_epoch: 4
sequence_len: 512
strict: false
test_datasets:
- path: ee4eb606-53c8-44b0-bd60-8cc0f514aae9_test.json
split: train
type: *id001
- path: ee4eb606-53c8-44b0-bd60-8cc0f514aae9_synth.json
split: train
type: *id001
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
warmup_ratio: 0.0
warmup_steps: 0
weight_decay: 0.0
```
</details><br>
# results/165a9c7a-e55f-4fe4-a943-03436a3e3efa
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the ee4eb606-53c8-44b0-bd60-8cc0f514aae9_test.json and the ee4eb606-53c8-44b0-bd60-8cc0f514aae9_synth.json datasets.
It achieves the following results on the evaluation set:
- Loss: 0.1343
## Model description
More information needed
## Intended uses & limitations
More information needed
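Since this repository holds a LoRA adapter (PEFT) on top of `facebook/opt-125m`, inference requires attaching the adapter to the base model. A hedged sketch; the exact prompt format follows the `instruct`/`output` fields used in training and is not documented here:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "mohhtl/165a9c7a-e55f-4fe4-a943-03436a3e3efa")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Illustrative prompt; the adapter's training prompt format is undocumented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```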
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 200.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2662 | 1.0 | 11 | 2.9886 |
| 2.8168 | 2.0 | 22 | 2.8644 |
| 2.6152 | 3.0 | 33 | 2.7696 |
| 2.459 | 4.0 | 44 | 2.6930 |
| 2.7944 | 5.0 | 55 | 2.6283 |
| 2.5208 | 6.0 | 66 | 2.5588 |
| 2.7786 | 7.0 | 77 | 2.4949 |
| 2.6515 | 8.0 | 88 | 2.4340 |
| 2.2674 | 9.0 | 99 | 2.3705 |
| 2.5577 | 10.0 | 110 | 2.3198 |
| 2.4592 | 11.0 | 121 | 2.2668 |
| 2.4093 | 12.0 | 132 | 2.2158 |
| 2.3639 | 13.0 | 143 | 2.1639 |
| 2.5389 | 14.0 | 154 | 2.1105 |
| 2.3411 | 15.0 | 165 | 2.0671 |
| 2.1697 | 16.0 | 176 | 2.0157 |
| 2.3553 | 17.0 | 187 | 1.9677 |
| 2.0474 | 18.0 | 198 | 1.9243 |
| 2.0354 | 19.0 | 209 | 1.8801 |
| 2.0952 | 20.0 | 220 | 1.8460 |
| 2.1412 | 21.0 | 231 | 1.7987 |
| 1.8789 | 22.0 | 242 | 1.7561 |
| 2.1106 | 23.0 | 253 | 1.7205 |
| 2.1634 | 24.0 | 264 | 1.6887 |
| 1.8412 | 25.0 | 275 | 1.6499 |
| 2.0144 | 26.0 | 286 | 1.6135 |
| 1.6635 | 27.0 | 297 | 1.5774 |
| 1.9489 | 28.0 | 308 | 1.5412 |
| 1.9128 | 29.0 | 319 | 1.5123 |
| 1.9403 | 30.0 | 330 | 1.4803 |
| 1.8155 | 31.0 | 341 | 1.4502 |
| 1.677 | 32.0 | 352 | 1.4157 |
| 1.711 | 33.0 | 363 | 1.3892 |
| 1.8867 | 34.0 | 374 | 1.3625 |
| 1.665 | 35.0 | 385 | 1.3331 |
| 1.6473 | 36.0 | 396 | 1.2994 |
| 1.6149 | 37.0 | 407 | 1.2782 |
| 1.5573 | 38.0 | 418 | 1.2529 |
| 1.4828 | 39.0 | 429 | 1.2269 |
| 1.6874 | 40.0 | 440 | 1.2032 |
| 1.4367 | 41.0 | 451 | 1.1772 |
| 1.8244 | 42.0 | 462 | 1.1638 |
| 1.6759 | 43.0 | 473 | 1.1338 |
| 1.441 | 44.0 | 484 | 1.1102 |
| 1.4036 | 45.0 | 495 | 1.0853 |
| 1.3478 | 46.0 | 506 | 1.0654 |
| 1.364 | 47.0 | 517 | 1.0454 |
| 1.3623 | 48.0 | 528 | 1.0337 |
| 1.5279 | 49.0 | 539 | 1.0034 |
| 1.4282 | 50.0 | 550 | 0.9823 |
| 1.6248 | 51.0 | 561 | 0.9642 |
| 1.45 | 52.0 | 572 | 0.9440 |
| 1.3812 | 53.0 | 583 | 0.9215 |
| 1.346 | 54.0 | 594 | 0.9083 |
| 1.2221 | 55.0 | 605 | 0.8916 |
| 1.2598 | 56.0 | 616 | 0.8744 |
| 1.2494 | 57.0 | 627 | 0.8573 |
| 1.4734 | 58.0 | 638 | 0.8489 |
| 1.2594 | 59.0 | 649 | 0.8275 |
| 1.2402 | 60.0 | 660 | 0.8099 |
| 1.21 | 61.0 | 671 | 0.7992 |
| 1.2709 | 62.0 | 682 | 0.7796 |
| 1.1257 | 63.0 | 693 | 0.7672 |
| 1.2183 | 64.0 | 704 | 0.7524 |
| 1.258 | 65.0 | 715 | 0.7542 |
| 1.1919 | 66.0 | 726 | 0.7275 |
| 1.2569 | 67.0 | 737 | 0.7154 |
| 1.1036 | 68.0 | 748 | 0.7091 |
| 1.2037 | 69.0 | 759 | 0.6970 |
| 1.1043 | 70.0 | 770 | 0.6761 |
| 1.1246 | 71.0 | 781 | 0.6615 |
| 1.0137 | 72.0 | 792 | 0.6515 |
| 1.0341 | 73.0 | 803 | 0.6362 |
| 1.079 | 74.0 | 814 | 0.6275 |
| 1.1389 | 75.0 | 825 | 0.6151 |
| 1.1196 | 76.0 | 836 | 0.6035 |
| 1.1468 | 77.0 | 847 | 0.5954 |
| 1.1121 | 78.0 | 858 | 0.5827 |
| 1.1423 | 79.0 | 869 | 0.5743 |
| 1.0607 | 80.0 | 880 | 0.5638 |
| 0.9178 | 81.0 | 891 | 0.5565 |
| 0.9652 | 82.0 | 902 | 0.5403 |
| 1.0683 | 83.0 | 913 | 0.5394 |
| 0.9669 | 84.0 | 924 | 0.5302 |
| 0.9991 | 85.0 | 935 | 0.5213 |
| 1.026 | 86.0 | 946 | 0.5099 |
| 0.9144 | 87.0 | 957 | 0.4992 |
| 0.8759 | 88.0 | 968 | 0.4900 |
| 0.8332 | 89.0 | 979 | 0.4869 |
| 0.9221 | 90.0 | 990 | 0.4759 |
| 0.9588 | 91.0 | 1001 | 0.4711 |
| 0.955 | 92.0 | 1012 | 0.4592 |
| 0.9948 | 93.0 | 1023 | 0.4529 |
| 0.9039 | 94.0 | 1034 | 0.4493 |
| 0.9002 | 95.0 | 1045 | 0.4417 |
| 0.8983 | 96.0 | 1056 | 0.4344 |
| 0.8587 | 97.0 | 1067 | 0.4295 |
| 0.783 | 98.0 | 1078 | 0.4208 |
| 0.6812 | 99.0 | 1089 | 0.4087 |
| 0.9963 | 100.0 | 1100 | 0.4070 |
| 0.9334 | 101.0 | 1111 | 0.4072 |
| 0.8602 | 102.0 | 1122 | 0.3948 |
| 0.759 | 103.0 | 1133 | 0.3908 |
| 0.8113 | 104.0 | 1144 | 0.3847 |
| 0.826 | 105.0 | 1155 | 0.3801 |
| 0.8672 | 106.0 | 1166 | 0.3734 |
| 0.8192 | 107.0 | 1177 | 0.3684 |
| 0.7564 | 108.0 | 1188 | 0.3613 |
| 0.8423 | 109.0 | 1199 | 0.3543 |
| 0.8328 | 110.0 | 1210 | 0.3535 |
| 0.8038 | 111.0 | 1221 | 0.3502 |
| 0.6775 | 112.0 | 1232 | 0.3450 |
| 0.8943 | 113.0 | 1243 | 0.3384 |
| 0.7262 | 114.0 | 1254 | 0.3350 |
| 0.816 | 115.0 | 1265 | 0.3302 |
| 0.7728 | 116.0 | 1276 | 0.3271 |
| 0.7633 | 117.0 | 1287 | 0.3266 |
| 0.8182 | 118.0 | 1298 | 0.3156 |
| 0.7854 | 119.0 | 1309 | 0.3085 |
| 0.7675 | 120.0 | 1320 | 0.3069 |
| 0.6357 | 121.0 | 1331 | 0.3052 |
| 0.7196 | 122.0 | 1342 | 0.2975 |
| 0.7605 | 123.0 | 1353 | 0.2948 |
| 0.8025 | 124.0 | 1364 | 0.2919 |
| 0.6853 | 125.0 | 1375 | 0.2842 |
| 0.7089 | 126.0 | 1386 | 0.2872 |
| 0.6575 | 127.0 | 1397 | 0.2829 |
| 0.6929 | 128.0 | 1408 | 0.2791 |
| 0.6611 | 129.0 | 1419 | 0.2762 |
| 0.6239 | 130.0 | 1430 | 0.2727 |
| 0.6972 | 131.0 | 1441 | 0.2675 |
| 0.7759 | 132.0 | 1452 | 0.2663 |
| 0.6491 | 133.0 | 1463 | 0.2600 |
| 0.7345 | 134.0 | 1474 | 0.2601 |
| 0.6856 | 135.0 | 1485 | 0.2552 |
| 0.667 | 136.0 | 1496 | 0.2515 |
| 0.6332 | 137.0 | 1507 | 0.2461 |
| 0.679 | 138.0 | 1518 | 0.2453 |
| 0.6876 | 139.0 | 1529 | 0.2472 |
| 0.6033 | 140.0 | 1540 | 0.2404 |
| 0.7307 | 141.0 | 1551 | 0.2373 |
| 0.7121 | 142.0 | 1562 | 0.2378 |
| 0.6913 | 143.0 | 1573 | 0.2307 |
| 0.6798 | 144.0 | 1584 | 0.2276 |
| 0.5749 | 145.0 | 1595 | 0.2261 |
| 0.7103 | 146.0 | 1606 | 0.2262 |
| 0.6538 | 147.0 | 1617 | 0.2211 |
| 0.5909 | 148.0 | 1628 | 0.2226 |
| 0.6153 | 149.0 | 1639 | 0.2152 |
| 0.5861 | 150.0 | 1650 | 0.2171 |
| 0.6768 | 151.0 | 1661 | 0.2132 |
| 0.5811 | 152.0 | 1672 | 0.2121 |
| 0.6094 | 153.0 | 1683 | 0.2112 |
| 0.5981 | 154.0 | 1694 | 0.2039 |
| 0.589 | 155.0 | 1705 | 0.2022 |
| 0.5843 | 156.0 | 1716 | 0.2043 |
| 0.6211 | 157.0 | 1727 | 0.2015 |
| 0.5605 | 158.0 | 1738 | 0.1969 |
| 0.5713 | 159.0 | 1749 | 0.1970 |
| 0.6631 | 160.0 | 1760 | 0.1915 |
| 0.5788 | 161.0 | 1771 | 0.1908 |
| 0.5825 | 162.0 | 1782 | 0.1904 |
| 0.5691 | 163.0 | 1793 | 0.1858 |
| 0.6619 | 164.0 | 1804 | 0.1894 |
| 0.6349 | 165.0 | 1815 | 0.1833 |
| 0.5825 | 166.0 | 1826 | 0.1834 |
| 0.5266 | 167.0 | 1837 | 0.1820 |
| 0.5802 | 168.0 | 1848 | 0.1794 |
| 0.5668 | 169.0 | 1859 | 0.1777 |
| 0.5162 | 170.0 | 1870 | 0.1763 |
| 0.5192 | 171.0 | 1881 | 0.1749 |
| 0.5117 | 172.0 | 1892 | 0.1718 |
| 0.5675 | 173.0 | 1903 | 0.1699 |
| 0.5288 | 174.0 | 1914 | 0.1686 |
| 0.5507 | 175.0 | 1925 | 0.1677 |
| 0.5272 | 176.0 | 1936 | 0.1665 |
| 0.5842 | 177.0 | 1947 | 0.1657 |
| 0.5611 | 178.0 | 1958 | 0.1635 |
| 0.5677 | 179.0 | 1969 | 0.1612 |
| 0.5198 | 180.0 | 1980 | 0.1608 |
| 0.4778 | 181.0 | 1991 | 0.1586 |
| 0.6142 | 182.0 | 2002 | 0.1561 |
| 0.4798 | 183.0 | 2013 | 0.1545 |
| 0.5122 | 184.0 | 2024 | 0.1552 |
| 0.573 | 185.0 | 2035 | 0.1553 |
| 0.6015 | 186.0 | 2046 | 0.1526 |
| 0.5704 | 187.0 | 2057 | 0.1520 |
| 0.5317 | 188.0 | 2068 | 0.1481 |
| 0.5398 | 189.0 | 2079 | 0.1506 |
| 0.5454 | 190.0 | 2090 | 0.1458 |
| 0.5383 | 191.0 | 2101 | 0.1440 |
| 0.4189 | 192.0 | 2112 | 0.1438 |
| 0.5139 | 193.0 | 2123 | 0.1446 |
| 0.5482 | 194.0 | 2134 | 0.1416 |
| 0.4866 | 195.0 | 2145 | 0.1414 |
| 0.5426 | 196.0 | 2156 | 0.1402 |
| 0.4471 | 197.0 | 2167 | 0.1366 |
| 0.5174 | 198.0 | 2178 | 0.1372 |
| 0.4742 | 199.0 | 2189 | 0.1362 |
| 0.4645 | 200.0 | 2200 | 0.1343 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
New-Caitlin-Clark-dance-shower/FULL.VIDEO.LINK.Caitlin.Clark.dance.shower.Viral.Video.Leaks.Official | New-Caitlin-Clark-dance-shower | 2025-05-24T10:43:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T10:42:56Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
TanAlexanderlz/ALL_NoCrop_ori16F-4B16F | TanAlexanderlz | 2025-05-24T10:40:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-24T09:04:19Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_NoCrop_ori16F-4B16F
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_NoCrop_ori16F-4B16F
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9942
- Accuracy: 0.7530
## Model description
More information needed
## Intended uses & limitations
More information needed
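For reference, a VideoMAE checkpoint like this one can usually be queried through the `video-classification` pipeline; this is an untested sketch, the clip path is illustrative, and a video backend such as `av` or `decord` is assumed to be installed:

```python
from transformers import pipeline

clf = pipeline("video-classification", model="TanAlexanderlz/ALL_NoCrop_ori16F-4B16F")
print(clf("sample_clip.mp4"))  # illustrative local file
```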
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2880
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5179 | 0.0670 | 193 | 0.5087 | 0.7378 |
| 0.4627 | 1.0670 | 386 | 0.4823 | 0.7683 |
| 0.6973 | 2.0670 | 579 | 0.6752 | 0.7256 |
| 0.6745 | 3.0670 | 772 | 1.7145 | 0.6585 |
| 0.4488 | 4.0670 | 965 | 0.7915 | 0.7988 |
| 0.0054 | 5.0670 | 1158 | 1.2858 | 0.7805 |
| 0.2292 | 6.0670 | 1351 | 1.0276 | 0.75 |
| 0.0718 | 7.0670 | 1544 | 1.1518 | 0.7622 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
WenFengg/ronaldo_o1_5 | WenFengg | 2025-05-24T10:40:07Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T10:29:14Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tvpavan/sarvam-m-mlx-fp16 | tvpavan | 2025-05-24T10:39:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mlx",
"conversational",
"en",
"bn",
"hi",
"kn",
"gu",
"mr",
"ml",
"or",
"pa",
"ta",
"te",
"base_model:sarvamai/sarvam-m",
"base_model:finetune:sarvamai/sarvam-m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T10:37:19Z | ---
library_name: transformers
license: apache-2.0
language:
- en
- bn
- hi
- kn
- gu
- mr
- ml
- or
- pa
- ta
- te
base_model: sarvamai/sarvam-m
base_model_relation: finetune
tags:
- mlx
---
# tvpavan/sarvam-m-mlx-fp16
The Model [tvpavan/sarvam-m-mlx-fp16](https://huggingface.co/tvpavan/sarvam-m-mlx-fp16) was converted to MLX format from [sarvamai/sarvam-m](https://huggingface.co/sarvamai/sarvam-m) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("tvpavan/sarvam-m-mlx-fp16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
New-Caitlin-Clark-dance-shower-Viral-Video/wATCH.Caitlin.Clark.dance.shower.viral.video.original.Link.Official | New-Caitlin-Clark-dance-shower-Viral-Video | 2025-05-24T10:36:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T10:25:43Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
phospho-app/omourier-gr00t-Lego_bleu-bh3b4 | phospho-app | 2025-05-24T10:31:29Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-24T09:48:19Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [omourier/Lego_bleu](https://huggingface.co/datasets/omourier/Lego_bleu)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 27
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
hannesvgel/race-albert-v2 | hannesvgel | 2025-05-24T10:29:10Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"multiple-choice",
"generated_from_trainer",
"dataset:ehovy/race",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2025-05-22T08:52:11Z | ---
library_name: transformers
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results_albert
results: []
datasets:
- ehovy/race
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# race-albert-v2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the middle-school subset of the [RACE dataset](https://huggingface.co/datasets/ehovy/race).
It achieves the following results on the test set:
- Loss: 0.8710
- Accuracy: 0.7089
## Model description
More information needed
## Intended uses & limitations
More information needed
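A hedged sketch of multiple-choice inference with this checkpoint; the passage, question, and option strings are illustrative:

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "hannesvgel/race-albert-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "Tom missed the bus, so he walked to school and arrived late."
question = "Why was Tom late?"
options = ["He overslept.", "He missed the bus.", "He forgot his bag.", "School started early."]

# Pair the context with each (question + option); the model scores the four pairs jointly.
enc = tokenizer([context] * 4, [f"{question} {o}" for o in options],
                return_tensors="pt", padding=True, truncation=True)
logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits  # shape (1, 4)
print(options[int(torch.argmax(logits))])
```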
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8709 | 1.0 | 3178 | 0.8257 | 0.6769 |
| 0.6377 | 2.0 | 6356 | 0.8329 | 0.7152 |
| 0.3548 | 3.0 | 9534 | 1.0367 | 0.7124 |
| 0.1412 | 4.0 | 12712 | 1.5380 | 0.7145 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
dougiefresh/jade_qwen_4b | dougiefresh | 2025-05-24T10:27:50Z | 39 | 0 | null | [
"safetensors",
"qwen3",
"logic",
"rhetoric",
"math",
"programming",
"aarch64",
"c",
"rust",
"nushell",
"grammar",
"en",
"dataset:dougiefresh/grammar_logic_rhetoric_and_math",
"dataset:dougiefresh/systems_programming_and_administration",
"dataset:dougiefresh/systems_programming_code_conversations",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-23T15:46:00Z | ---
license: cc-by-nc-sa-4.0
datasets:
- dougiefresh/grammar_logic_rhetoric_and_math
- dougiefresh/systems_programming_and_administration
- dougiefresh/systems_programming_code_conversations
language:
- en
base_model:
- Qwen/Qwen3-4B
tags:
- logic
- rhetoric
- math
- programming
- aarch64
- c
- rust
- nushell
- grammar
--- |
BAAI/RoboBrain | BAAI | 2025-05-24T10:25:59Z | 1,388 | 17 | null | [
"safetensors",
"llava_onevision",
"en",
"dataset:BAAI/ShareRobot",
"dataset:lmms-lab/LLaVA-OneVision-Data",
"arxiv:2502.21257",
"license:apache-2.0",
"region:us"
] | null | 2025-03-27T03:20:39Z | ---
license: apache-2.0
datasets:
- BAAI/ShareRobot
- lmms-lab/LLaVA-OneVision-Data
language:
- en
---
<div align="center">
<img src="https://github.com/FlagOpen/RoboBrain/raw/main/assets/logo.jpg" width="400"/>
</div>
# [CVPR 25] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete.
<p align="center">
</a>  ⭐️ <a href="https://superrobobrain.github.io/">Project</a></a>   |   🤗 <a href="https://huggingface.co/BAAI/RoboBrain/">Hugging Face</a>   |   🤖 <a href="https://www.modelscope.cn/models/BAAI/RoboBrain/files/">ModelScope</a>   |   🌎 <a href="https://github.com/FlagOpen/ShareRobot">Dataset</a>   |   📑 <a href="http://arxiv.org/abs/2502.21257">Paper</a>   |   💬 <a href="./assets/wechat.png">WeChat</a>
</p>
<p align="center">
</a>  🎯 <a href="">RoboOS (Coming Soon)</a>: An Efficient Open-Source Multi-Robot Coordination System for RoboBrain.
</p>
<p align="center">
</a>  🎯 <a href="https://tanhuajie.github.io/ReasonRFT/">Reason-RFT</a>: Exploring a New RFT Paradigm to Enhance RoboBrain's Visual Reasoning Capabilities.
</p>
## 🔥 Overview
Recent advancements in Multimodal Large Language Models (MLLMs) have shown remarkable capabilities across various multimodal contexts. However, their application in robotic scenarios, particularly for long-horizon manipulation tasks, reveals significant limitations. These limitations arise from the current MLLMs lacking three essential robotic brain capabilities: **(1) Planning Capability**, which involves decomposing complex manipulation instructions into manageable sub-tasks; **(2) Affordance Perception**, the ability to recognize and interpret the affordances of interactive objects; and **(3) Trajectory Prediction**, the foresight to anticipate the complete manipulation trajectory necessary for successful execution. To enhance the robotic brain's core capabilities from abstract to concrete, we introduce ShareRobot, a high-quality heterogeneous dataset that labels multi-dimensional information such as task planning, object affordance, and end-effector trajectory. ShareRobot's diversity and accuracy have been meticulously refined by three human annotators. Building on this dataset, we developed RoboBrain, an MLLM-based model that combines robotic and general multi-modal data, utilizes a multi-stage training strategy, and incorporates long videos and high-resolution images to improve its robotic manipulation capabilities. Extensive experiments demonstrate that RoboBrain achieves state-of-the-art performance across various robotic tasks, highlighting its potential to advance robotic brain capabilities.

## 🚀 Features
This repository supports:
- **`Data Preparation`**: Please refer to [Dataset Preparation](https://github.com/FlagOpen/ShareRobot) for how to prepare the dataset.
- **`Training for RoboBrain`**: Please refer to [Training Section](#Training) for the usage of training scripts.
- **`Support HF/VLLM Inference`**: Please see [Inference Section](#Inference), now we support inference with [VLLM](https://github.com/vllm-project/vllm).
- **`Evaluation for RoboBrain`**: Please refer to [Evaluation Section](#Evaluation) for how to prepare the benchmarks.
- **`ShareRobot Generation`**: Please refer to [ShareRobot](https://github.com/FlagOpen/ShareRobot) for details.
## 🗞️ News
- **`2025-04-04`**: 🤗 We have released [Trajectory Checkpoint](https://huggingface.co/BAAI/RoboBrain-LoRA-Trajectory/) in Huggingface.
- **`2025-03-29`**: 🤗 We have released [Affordance Checkpoint](https://huggingface.co/BAAI/RoboBrain-LoRA-Affordance/) in Huggingface.
- **`2025-03-27`**: 🤗 We have released [Planning Checkpoint](https://huggingface.co/BAAI/RoboBrain/) in Huggingface.
- **`2025-03-26`**: 🔥 We have released the [RoboBrain](https://github.com/FlagOpen/RoboBrain/) repository.
- **`2025-02-27`**: 🌍 Our [RoboBrain](http://arxiv.org/abs/2502.21257/) was accepted to CVPR2025.
## 📆 Todo
- [x] Release scripts for model training and inference.
- [x] Release Planning checkpoint.
- [x] Release Affordance checkpoint.
- [x] Release ShareRobot dataset.
- [x] Release Trajectory checkpoint.
- [ ] Release evaluation scripts for Benchmarks.
- [ ] Train a more powerful **RoboBrain-v2**.
## 🤗 Models
- **[`Base Planning Model`](https://huggingface.co/BAAI/RoboBrain/)**: The model was trained on general datasets in Stages 1–2 and on the Robotic Planning dataset in Stage 3, which is designed for Planning prediction.
- **[`A-LoRA for Affordance`](https://huggingface.co/BAAI/RoboBrain-LoRA-Affordance/)**: Based on the Base Planning Model, Stage 4 involves LoRA-based training with our Affordance dataset to predict affordance.
- **[`T-LoRA for Trajectory`](https://huggingface.co/BAAI/RoboBrain-LoRA-Trajectory/)**: Based on the Base Planning Model, Stage 4 involves LoRA-based training with our Trajectory dataset to predict trajectory.

| Models | Checkpoint | Description |
|----------------------|----------------------------------------------------------------|------------------------------------------------------------|
| Planning Model | [🤗 Planning CKPTs](https://huggingface.co/BAAI/RoboBrain/) | Used for Planning prediction in our paper |
| Affordance (A-LoRA) | [🤗 Affordance CKPTs](https://huggingface.co/BAAI/RoboBrain-LoRA-Affordance/) | Used for Affordance prediction in our paper |
| Trajectory (T-LoRA) | [🤗 Trajectory CKPTs](https://huggingface.co/BAAI/RoboBrain-LoRA-Trajectory/) | Used for Trajectory prediction in our paper |
## 🛠️ Setup
```bash
# clone repo.
git clone https://github.com/FlagOpen/RoboBrain.git
cd RoboBrain
# build conda env.
conda create -n robobrain python=3.10
conda activate robobrain
pip install -r requirements.txt
```
## <a id="Training"> 🤖 Training</a>
### 1. Data Preparation
```bash
# Modify datasets for Stage 1, please refer to:
- yaml_path: scripts/train/yaml/stage_1_0.yaml
# Modify datasets for Stage 1.5, please refer to:
- yaml_path: scripts/train/yaml/stage_1_5.yaml
# Modify datasets for Stage 2_si, please refer to:
- yaml_path: scripts/train/yaml/stage_2_si.yaml
# Modify datasets for Stage 2_ov, please refer to:
- yaml_path: scripts/train/yaml/stage_2_ov.yaml
# Modify datasets for Stage 3_plan, please refer to:
- yaml_path: scripts/train/yaml/stage_3_planning.yaml
# Modify datasets for Stage 4_aff, please refer to:
- yaml_path: scripts/train/yaml/stage_4_affordance.yaml
# Modify datasets for Stage 4_traj, please refer to:
- yaml_path: scripts/train/yaml/stage_4_trajectory.yaml
```
**Note:** The sample format in each json file should be like:
```json
{
"id": "xxxx",
"image": [
"image1.png",
"image2.png",
],
"conversations": [
{
"from": "human",
"value": "<image>\n<image>\nAre there numerous dials near the bottom left of the tv?"
},
{
"from": "gpt",
"value": "Yes. The sun casts shadows ... a serene, clear sky."
}
]
},
```
### 2. Training
```bash
# Training on Stage 1:
bash scripts/train/stage_1_0_pretrain.sh
# Training on Stage 1.5:
bash scripts/train/stage_1_5_direct_finetune.sh
# Training on Stage 2_si:
bash scripts/train/stage_2_0_resume_finetune_si.sh
# Training on Stage 2_ov:
bash scripts/train/stage_2_0_resume_finetune_ov.sh
# Training on Stage 3_plan:
bash scripts/train/stage_3_0_resume_finetune_robo.sh
# Training on Stage 4_aff:
bash scripts/train/stage_4_0_resume_finetune_lora_a.sh
# Training on Stage 4_traj:
bash scripts/train/stage_4_0_resume_finetune_lora_t.sh
```
**Note:** Please change the environment variables (e.g. *DATA_PATH*, *IMAGE_FOLDER*, *PREV_STAGE_CHECKPOINT*) in the script to your own.
### 3. Convert original weights to HF weights
```bash
# Planning Model
python model/llava_utils/convert_robobrain_to_hf.py --model_dir /path/to/original/checkpoint/ --dump_path /path/to/output/
# A-LoRA & T-RoRA
python model/llava_utils/convert_lora_weights_to_hf.py --model_dir /path/to/original/checkpoint/ --dump_path /path/to/output/
```
## <a id="Inference">⭐️ Inference</a>
### 1. Usage for Planning Prediction
#### Option 1: HF inference
```python
from inference import SimpleInference
model_id = "BAAI/RoboBrain"
model = SimpleInference(model_id)
prompt = "Given the obiects in the image, if you are required to complete the task \"Put the apple in the basket\", what is your detailed plan? Write your plan and explain it in detail, using the following format: Step_1: xxx\nStep_2: xxx\n ...\nStep_n: xxx\n"
image = "./assets/demo/planning.png"
pred = model.inference(prompt, image, do_sample=True)
print(f"Prediction: {pred}")
'''
Prediction: (as an example)
Step_1: Move to the apple. Move towards the apple on the table.
Step_2: Pick up the apple. Grab the apple and lift it off the table.
Step_3: Move towards the basket. Move the apple towards the basket without dropping it.
Step_4: Put the apple in the basket. Place the apple inside the basket, ensuring it is in a stable position.
'''
```
#### Option 2: VLLM inference
Install and launch VLLM
```bash
# Install vllm package
pip install vllm==0.6.6.post1
# Launch Robobrain with vllm
python -m vllm.entrypoints.openai.api_server --model BAAI/RoboBrain --served-model-name robobrain --max_model_len 16384 --limit_mm_per_prompt image=8
```
Run python script as example:
```python
from openai import OpenAI
import base64
openai_api_key = "robobrain-123123"
openai_api_base = "http://127.0.0.1:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
prompt = "Given the obiects in the image, if you are required to complete the task \"Put the apple in the basket\", what is your detailed plan? Write your plan and explain it in detail, using the following format: Step_1: xxx\nStep_2: xxx\n ...\nStep_n: xxx\n"
image = "./assets/demo/planning.png"
with open(image, "rb") as f:
encoded_image = base64.b64encode(f.read())
encoded_image = encoded_image.decode("utf-8")
base64_img = f"data:image;base64,{encoded_image}"
response = client.chat.completions.create(
model="robobrain",
messages=[
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": base64_img}},
{"type": "text", "text": prompt},
],
},
]
)
content = response.choices[0].message.content
print(content)
'''
Prediction: (as an example)
Step_1: Move to the apple. Move towards the apple on the table.
Step_2: Pick up the apple. Grab the apple and lift it off the table.
Step_3: Move towards the basket. Move the apple towards the basket without dropping it.
Step_4: Put the apple in the basket. Place the apple inside the basket, ensuring it is in a stable position.
'''
```
### 2. Usage for Affordance Prediction
```python
from inference import SimpleInference
model_id = "BAAI/RoboBrain"
lora_id = "BAAI/RoboBrain-LoRA-Affordance"
model = SimpleInference(model_id, lora_id)
# Example 1:
prompt = "You are a robot using the joint control. The task is \"pick_up the suitcase\". Please predict a possible affordance area of the end effector?"
image = "./assets/demo/affordance_1.jpg"
pred = model.inference(prompt, image, do_sample=False)
print(f"Prediction: {pred}")
'''
Prediction: [0.733, 0.158, 0.845, 0.263]
'''
# Example 2:
prompt = "You are a robot using the joint control. The task is \"push the bicycle\". Please predict a possible affordance area of the end effector?"
image = "./assets/demo/affordance_2.jpg"
pred = model.inference(prompt, image, do_sample=False)
print(f"Prediction: {pred}")
'''
Prediction: [0.600, 0.127, 0.692, 0.227]
'''
```
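The predicted box appears to be in normalized `[x1, y1, x2, y2]` form. A minimal sketch (not part of the official repo) for overlaying a prediction on the input image, assuming the coordinates are fractions of the image size:
```python
# Sketch: overlay a predicted affordance box on the input image.
# Assumption: the output is a normalized [x1, y1, x2, y2] box in [0, 1].
import ast
from PIL import Image, ImageDraw

img = Image.open("./assets/demo/affordance_1.jpg").convert("RGB")
x1, y1, x2, y2 = ast.literal_eval(pred)  # e.g. [0.733, 0.158, 0.845, 0.263]
w, h = img.size
draw = ImageDraw.Draw(img)
draw.rectangle([x1 * w, y1 * h, x2 * w, y2 * h], outline="red", width=3)
img.save("affordance_vis.png")
```
The same denormalization applies to the trajectory points predicted in the next section.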

### 3. Usage for Trajectory Prediction
```python
# please refer to https://github.com/FlagOpen/RoboBrain
from inference import SimpleInference
model_id = "BAAI/RoboBrain"
lora_id = "BAAI/RoboBrain-LoRA-Trajectory"  # assumption: the trajectory LoRA belongs here, not the affordance one
model = SimpleInference(model_id, lora_id)
# Example 1:
prompt = "You are a robot using the joint control. The task is \"reach for the cloth\". Please predict up to 10 key trajectory points to complete the task. Your answer should be formatted as a list of tuples, i.e. [[x1, y1], [x2, y2], ...], where each tuple contains the x and y coordinates of a point."
image = "./assets/demo/trajectory_1.jpg"
pred = model.inference(prompt, image, do_sample=False)
print(f"Prediction: {pred}")
'''
Prediction: [[0.781, 0.305], [0.688, 0.344], [0.570, 0.344], [0.492, 0.312]]
'''
# Example 2:
prompt = "You are a robot using the joint control. The task is \"reach for the grapes\". Please predict up to 10 key trajectory points to complete the task. Your answer should be formatted as a list of tuples, i.e. [[x1, y1], [x2, y2], ...], where each tuple contains the x and y coordinates of a point."
image = "./assets/demo/trajectory_2.jpg"
pred = model.inference(prompt, image, do_sample=False)
print(f"Prediction: {pred}")
'''
Prediction: [[0.898, 0.352], [0.766, 0.344], [0.625, 0.273], [0.500, 0.195]]
'''
```
## <a id="Evaluation">🤖 Evaluation</a>
*Coming Soon ...*

<!-- <div align="center">
<img src="https://github.com/FlagOpen/RoboBrain/blob/main/assets/result.png" />
</div> -->
## 😊 Acknowledgement
We would like to express our sincere gratitude to the developers and contributors of the following projects:
1. [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT): The comprehensive codebase for training Vision-Language Models (VLMs).
2. [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval): A powerful evaluation tool for Vision-Language Models (VLMs).
3. [vllm](https://github.com/vllm-project/vllm): A high-throughput and memory-efficient LLMs/VLMs inference engine.
4. [OpenEQA](https://github.com/facebookresearch/open-eqa): A wonderful benchmark for Embodied Question Answering.
5. [RoboVQA](https://github.com/google-deepmind/robovqa): Provides high-level reasoning models and datasets for robotics applications.
Their outstanding contributions have played a pivotal role in advancing our research and development initiatives.
## 📑 Citation
If you find this project useful, please consider citing us:
```bib
@article{ji2025robobrain,
title={RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete},
author={Ji, Yuheng and Tan, Huajie and Shi, Jiayu and Hao, Xiaoshuai and Zhang, Yuan and Zhang, Hengyuan and Wang, Pengwei and Zhao, Mengdi and Mu, Yao and An, Pengju and others},
journal={arXiv preprint arXiv:2502.21257},
year={2025}
}
```
|
doguilmak/cityscapes-controlnet-sd15 | doguilmak | 2025-05-24T10:21:51Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"controlnet",
"stable-diffusion",
"conditional-generation",
"segmentation",
"image-to-image",
"en",
"arxiv:2302.05543",
"base_model:lllyasviel/sd-controlnet-seg",
"base_model:adapter:lllyasviel/sd-controlnet-seg",
"license:mit",
"region:us"
] | image-to-image | 2025-05-18T15:00:30Z | ---
license: mit
language:
- en
metrics:
- mse
base_model:
- lllyasviel/sd-controlnet-seg
pipeline_tag: image-to-image
tags:
- controlnet
- stable-diffusion
- conditional-generation
- segmentation
---
# Model Card for Cityscapes ControlNet with Stable Diffusion v1.5

This model is a fine-tuned version of ControlNet built on top of **Stable Diffusion v1.5**, specifically conditioned on **semantic segmentation maps** from the **Cityscapes dataset**. It enables structure-aware image generation by combining natural language prompts with dense pixel-level guidance in the form of segmentation masks. The result is highly controllable generation of realistic urban street scenes that align with both spatial layouts and descriptive context.
## Model Description
- **Base Model**: [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
- **Control Type**: Semantic segmentation maps (Cityscapes-style RGB masks)
- **Architecture**: U-Net + ControlNet adapter + Variational Autoencoder (VAE) + CLIP Text Encoder (ViT-L/14)
- **Training Epochs**: 50 full passes over the training data
- **Training Dataset**: 3475 annotated image-label pairs from the Cityscapes dataset (train + val)
- **Resolution**: Trained at 256×256 resolution
- **Hardware**: NVIDIA A100 40GB GPU — total training time was approximately 2 hours
- **Loss Function**: Mean Squared Error (MSE) between predicted and true noise vectors (used in DDPM training)
The ControlNet branches were trained while freezing the base Stable Diffusion weights. This setup maintains prior knowledge from the original diffusion model while specializing its structure conditioning through segmentation.
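A minimal sketch of that training setup, under the assumption that it follows the standard diffusers ControlNet recipe (variable names such as `noisy_latents`, `text_embeds`, and `seg_map` are placeholders, not code from this repository, and the learning rate is illustrative):
```python
# Sketch under stated assumptions; not the exact training code for this checkpoint.
import torch
import torch.nn.functional as F
from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg")

unet.requires_grad_(False)       # base SD weights stay frozen
controlnet.requires_grad_(True)  # only the ControlNet adapter is trained
optimizer = torch.optim.AdamW(controlnet.parameters(), lr=1e-5)

# Inside the training loop (noisy_latents, timesteps, text_embeds, seg_map, noise assumed given):
down_res, mid_res = controlnet(noisy_latents, timesteps,
                               encoder_hidden_states=text_embeds,
                               controlnet_cond=seg_map, return_dict=False)
noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeds,
                  down_block_additional_residuals=down_res,
                  mid_block_additional_residual=mid_res).sample
loss = F.mse_loss(noise_pred, noise)  # MSE between predicted and true noise
```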
## Usage
This model is available via the `diffusers` library. Here's how to load and use it:
```python
from diffusers import StableDiffusionControlNetPipeline
import torch
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"doguilmak/cityscapes-controlnet-sd15",
torch_dtype=torch.float32,
safety_checker=None
)
pipe.to("cuda")
# Load your segmentation map (RGB format expected)
from PIL import Image
control = Image.open("segmentation_map.png").convert("RGB")
# Run generation
result = pipe(
    prompt="a detailed urban street, cinematic lighting",
    negative_prompt="blurry, distorted",
    image=control,  # the segmentation map is the ControlNet conditioning input
    num_inference_steps=50,
    guidance_scale=9,
    output_type="pil"
).images[0]
result.save("result.png")
```
## Example Outputs
Input Segmentation Map

## Limitations
- The model was trained at **256×256** resolution; higher-resolution inference may lead to artifacts unless inputs are resized first (see the sketch below).
- It performs best on scenes that resemble **urban environments**, such as city streets and buildings.
- The input control image must closely resemble **Cityscapes segmentation formats** (classes and layout).
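For example, a control map can be brought to the training resolution before being passed to the pipeline; nearest-neighbor resampling keeps class colors crisp instead of blending labels (a small sketch, not from the original card):
```python
# Sketch: resize a Cityscapes-style segmentation map to the 256x256 training resolution.
from PIL import Image

control = Image.open("segmentation_map.png").convert("RGB")
control = control.resize((256, 256), resample=Image.NEAREST)  # NEAREST avoids mixed label colors
```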
## License
The Stable Diffusion base model is distributed under the [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license), which allows commercial and non-commercial use with certain restrictions.
Our model is distributed under the [MIT license](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md).
## References
- **ControlNet Segmentation Model**: [lllyasviel/sd-controlnet-seg @ Hugging Face](https://huggingface.co/lllyasviel/sd-controlnet-seg)
- **ControlNet Paper**: L. Zhang, A. Rao, and M. Agrawala, “Adding Conditional Control to Text-to-Image Diffusion Models,” _arXiv preprint_ arXiv:2302.05543, 2023.
|
vmpsergio/ac512c54-bb35-466f-8a87-2858e420e2d9 | vmpsergio | 2025-05-24T10:17:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-24T09:31:46Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ac512c54-bb35-466f-8a87-2858e420e2d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- aa208f6e880a6925_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: vmpsergio/ac512c54-bb35-466f-8a87-2858e420e2d9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/aa208f6e880a6925_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 535b6010-08d2-401c-aed4-b8c0c7c5416c
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 535b6010-08d2-401c-aed4-b8c0c7c5416c
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# ac512c54-bb35-466f-8a87-2858e420e2d9
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8204 | 0.1063 | 280 | 5.4510 |
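As a usage sketch (assumed, not part of the original card), the adapter can be loaded on top of its base model with PEFT:
```python
# Sketch: load the LoRA adapter over the base model; assumes a CUDA-capable setup.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3-medium-4k-instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # mirrors trust_remote_code: true in the config above
)
model = PeftModel.from_pretrained(base, "vmpsergio/ac512c54-bb35-466f-8a87-2858e420e2d9")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Phi-3-medium-4k-instruct")
```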
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
WenFengg/alibaba_9 | WenFengg | 2025-05-24T10:17:45Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T10:04:04Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
softaken/vCard_Duplicate_Remover | softaken | 2025-05-24T10:17:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T10:08:22Z | Softaken vCard Duplicate Remover is software that allows users to safely and accurately remove duplicate contacts from vCard (.vcf) files. The tool supports batch processing, allowing it to load multiple VCF files at once and remove duplicates from all of them. Duplicates are identified based on various fields such as name, email address, mobile number, company name, etc. Keeping the structure of the contact data intact, the tool removes only duplicate information, thereby not losing any original and important information. The software is fully compatible with all major versions of vCard 2.1, 3.0, and 4.0. This allows for easy processing of files created from different platforms and devices. The interface is completely graphical and simple, which can be easily operated even with little technical knowledge. It works with all major versions of Windows, like Windows XP, Vista, 7, 8.1, 8, 10, and 11. The demo version of the software is available for free, allowing a limited number of duplicate vCard removals for testing. The software comes with continuous technical support so that users can continue to this application it without interruption.
Read More: https://www.softaken.com/vcard-duplicate-remover |
MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_cfda_ep10_22 | MinaMila | 2025-05-24T10:17:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T10:17:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/7cae8b94-7f04-4f1a-a176-2a2ce97282bf | dimasik2987 | 2025-05-24T10:16:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-24T09:31:44Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7cae8b94-7f04-4f1a-a176-2a2ce97282bf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- aa208f6e880a6925_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik2987/7cae8b94-7f04-4f1a-a176-2a2ce97282bf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/aa208f6e880a6925_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 535b6010-08d2-401c-aed4-b8c0c7c5416c
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 535b6010-08d2-401c-aed4-b8c0c7c5416c
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 7cae8b94-7f04-4f1a-a176-2a2ce97282bf
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.3172 | 0.0003 | 1 | 5.8685 |
| 7.2759 | 0.0712 | 250 | 3.8683 |
| 7.5294 | 0.1423 | 500 | 3.6353 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cytoe/dickbot-0.6B-ft | cytoe | 2025-05-24T10:12:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T10:11:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/ccaae072-80ae-48a9-a068-bed0b9e9f934 | sergioalves | 2025-05-24T10:12:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-24T09:39:28Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ccaae072-80ae-48a9-a068-bed0b9e9f934
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- aa208f6e880a6925_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/ccaae072-80ae-48a9-a068-bed0b9e9f934
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/aa208f6e880a6925_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 535b6010-08d2-401c-aed4-b8c0c7c5416c
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 535b6010-08d2-401c-aed4-b8c0c7c5416c
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# ccaae072-80ae-48a9-a068-bed0b9e9f934
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.3172 | 0.0003 | 1 | 5.8685 |
| 7.2752 | 0.0712 | 250 | 3.8728 |
| 7.5235 | 0.1423 | 500 | 3.6378 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sondekom/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-webbed_barky_hamster | sondekom | 2025-05-24T10:11:48Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am webbed barky hamster",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T00:47:56Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-webbed_barky_hamster
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am webbed barky hamster
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-webbed_barky_hamster
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sondekom/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-webbed_barky_hamster", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Aya-Empati-v3-GGUF | mradermacher | 2025-05-24T10:09:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"matrixportal",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:matrixportal/Aya-Empati-v3",
"base_model:quantized:matrixportal/Aya-Empati-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T10:03:47Z | ---
base_model: matrixportal/Aya-Empati-v3
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- matrixportal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/matrixportal/Aya-Empati-v3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
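As one concrete option, a quantized file can be loaded locally with `llama-cpp-python` (a sketch; pick a file from the table below and download it first):
```python
# Sketch using llama-cpp-python; assumes the Q4_K_M file was downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="Aya-Empati-v3.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```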
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-Empati-v3-GGUF/resolve/main/Aya-Empati-v3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
infogep/2b849df7-293e-496f-8a42-ded51330bc70 | infogep | 2025-05-24T10:09:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-24T09:31:42Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b849df7-293e-496f-8a42-ded51330bc70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- aa208f6e880a6925_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: infogep/2b849df7-293e-496f-8a42-ded51330bc70
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/aa208f6e880a6925_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 535b6010-08d2-401c-aed4-b8c0c7c5416c
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 535b6010-08d2-401c-aed4-b8c0c7c5416c
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 2b849df7-293e-496f-8a42-ded51330bc70
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.8087 | 0.0002 | 1 | 5.8311 |
| 4.1837 | 0.0593 | 250 | 3.8775 |
| 3.4705 | 0.1186 | 500 | 3.6443 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
quansuv/bert2bert_cnn_daily_mail_ff | quansuv | 2025-05-24T10:08:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-24T05:03:44Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: bert2bert_cnn_daily_mail_ff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert_cnn_daily_mail_ff
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
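Assuming from its name that this is an encoder-decoder summarization model fine-tuned on CNN/DailyMail, a minimal usage sketch:
```python
# Sketch; assumes the checkpoint is a seq2seq summarizer, as the model name suggests.
from transformers import pipeline

summarizer = pipeline("summarization", model="quansuv/bert2bert_cnn_daily_mail_ff")
article = "(news article text here)"
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```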
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
silomay/llama-3.2-3b-model-copy | silomay | 2025-05-24T10:07:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T10:04:36Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** silomay
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf | RichardErkhov | 2025-05-24T10:02:49Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T07:34:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605 - GGUF
- Model creator: https://huggingface.co/GitBag/
- Original model: https://huggingface.co/GitBag/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q2_K.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q2_K.gguf) | Q2_K | 2.96GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K.gguf) | Q3_K | 3.74GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_0.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_K.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_K.gguf) | Q4_K | 4.58GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_1.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q4_1.gguf) | Q4_1 | 4.78GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_0.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_K.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_K.gguf) | Q5_K | 5.34GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_1.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q6_K.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q6_K.gguf) | Q6_K | 6.14GB |
| [reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q8_0.gguf](https://huggingface.co/RichardErkhov/GitBag_-_reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605-gguf/blob/main/reasoning_rebel_meta_general_1024_1024_eta_1e4_lr_3e-7_1734666605.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VIDEO-18-Katrina-Lim-Kiffy-Viral-Video/VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official | VIDEO-18-Katrina-Lim-Kiffy-Viral-Video | 2025-05-24T09:58:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:58:15Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
TianheWu/VisualQuality-R1-7B-preview | TianheWu | 2025-05-24T09:57:43Z | 106 | 5 | null | [
"safetensors",
"qwen2_5_vl",
"IQA",
"VLM",
"Reasoning-Induced",
"Pytorch",
"reinforcement-learning",
"en",
"arxiv:2505.14460",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:mit",
"region:us"
] | reinforcement-learning | 2025-04-29T18:27:03Z | ---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: reinforcement-learning
tags:
- IQA
- VLM
- Reasoning-Induced
- Pytorch
---
# VisualQuality-R1-7B-preview
This is a demo version of VisualQuality-R1, trained on a combination of KADID-10K, TID2013, and KONIQ-10K. The base model of VisualQuality-R1 is Qwen2.5-VL-7B-Instruct.
Paper link: [arXiv](https://arxiv.org/abs/2505.14460)
## Quick Start
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import json
import numpy as np
import torch
import random
import re
import os
def score_image(model_path, image_path):
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map=device,
)
    processor = AutoProcessor.from_pretrained(model_path)
processor.tokenizer.padding_side = "left"
PROMPT = (
"You are doing the image quality assessment task. Here is the question: "
"What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, "
"rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality."
)
x = {
"image": [image_path],
"question": PROMPT,
}
QUESTION_TEMPLATE = "{Question} First output the thinking process in <think> </think> tags and then output the final answer with only one score in <answer> </answer> tags."
message = [
{
"role": "user",
"content": [
*({'type': 'image', 'image': img_path} for img_path in x['image']),
{"type": "text", "text": QUESTION_TEMPLATE.format(Question=x['question'])}
],
}
]
batch_messages = [message]
# Preparation for inference
text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages]
image_inputs, video_inputs = process_vision_info(batch_messages)
inputs = processor(
text=text,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=256, do_sample=True)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
batch_output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
reasoning = re.findall(r'<think>(.*?)</think>', batch_output_text[0], re.DOTALL)
reasoning = reasoning[-1].strip()
model_output_matches = re.findall(r'<answer>(.*?)</answer>', batch_output_text[0], re.DOTALL)
model_answer = model_output_matches[-1].strip()
score = float(re.search(r'\d+(\.\d+)?', model_answer).group())
return reasoning, score
random.seed(42)
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
### Modify here
model_path = ""
image_path = ""
reasoning, score = score_image(
model_path=model_path,
image_path=image_path
)
print(reasoning)
print(score)
``` |
BeckerAnas/vivid-silence-196 | BeckerAnas | 2025-05-24T09:57:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-24T08:11:52Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vivid-silence-196
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivid-silence-196
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8678
- Accuracy: 0.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0737 | 1.0 | 18 | 0.9622 | 0.5234 |
| 0.9317 | 2.0 | 36 | 0.8867 | 0.5879 |
| 0.8886 | 3.0 | 54 | 0.8678 | 0.6055 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
|
ErikCikalleshi/alpaca_lora_model_lora | ErikCikalleshi | 2025-05-24T09:57:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T07:40:13Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ErikCikalleshi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/llama_instbase_3b_LoRa_Adult_cfda_ep7_22 | MinaMila | 2025-05-24T09:56:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T09:56:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtusDev/PocketDoc_Dans-PersonalityEngine-V1.3.0-24b_EXL3_3.25bpw_H6 | ArtusDev | 2025-05-24T09:55:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"legal",
"medical",
"finance",
"exl3",
"conversational",
"en",
"ar",
"de",
"fr",
"es",
"hi",
"pt",
"ja",
"ko",
"dataset:PocketDoc/Dans-Prosemaxx-RP",
"dataset:PocketDoc/Dans-Personamaxx-Logs-2",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Prosemaxx-Instructwriter-Long",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge-2",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Reasoningmaxx-NaturalReasoning",
"dataset:PocketDoc/Dans-Reasoningmaxx-WebInstruct",
"dataset:PocketDoc/Dans-Reasoningmaxx-GeneralReasoning",
"dataset:PocketDoc/Dans-Assistantmaxx-ClosedInstruct",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:42:37Z | ---
thumbnail: >-
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/pe.png
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- legal
- medical
- finance
- exl3
datasets:
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- PocketDoc/Dans-Assistantmaxx-ClosedInstruct
language:
- en
- ar
- de
- fr
- es
- hi
- pt
- ja
- ko
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
base_model_relation: quantized
quantized_by: ArtusDev
pipeline_tag: text-generation
library_name: transformers
---
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Dans-PersonalityEngine-V1.3.0-24b</title>
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center">
<h2>Dans-PersonalityEngine-V1.3.0-24b</h2>
<pre class="code-block" style="display: inline-block; text-align: left; font-size: clamp(2px, 0.8vw, 14px); line-height: 1.2; max-width: 100%; overflow: hidden; white-space: pre;">
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠄⠀⡂⠀⠁⡄⢀⠁⢀⣈⡄⠌⠐⠠⠤⠄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⡄⠆⠀⢠⠀⠛⣸⣄⣶⣾⡷⡾⠘⠃⢀⠀⣴⠀⡄⠰⢆⣠⠘⠰⠀⡀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠃⠀⡋⢀⣤⡿⠟⠋⠁⠀⡠⠤⢇⠋⠀⠈⠃⢀⠀⠈⡡⠤⠀⠀⠁⢄⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠁⡂⠀⠀⣀⣔⣧⠟⠋⠀⢀⡄⠀⠪⣀⡂⢁⠛⢆⠀⠀⠀⢎⢀⠄⢡⠢⠛⠠⡀⠀⠄⠀⠀
⠀⠀⡀⠡⢑⠌⠈⣧⣮⢾⢏⠁⠀⠀⡀⠠⠦⠈⠀⠞⠑⠁⠀⠀⢧⡄⠈⡜⠷⠒⢸⡇⠐⠇⠿⠈⣖⠂⠀
⠀⢌⠀⠤⠀⢠⣞⣾⡗⠁⠀⠈⠁⢨⡼⠀⠀⠀⢀⠀⣀⡤⣄⠄⠈⢻⡇⠀⠐⣠⠜⠑⠁⠀⣀⡔⡿⠨⡄
⠈⠂⠀⠆⠀⣼⣾⠟⠀⠑⠀⡐⠗⠉⠀⠐⠶⣤⡵⠋⠀⠠⠹⡌⡀⠘⠇⢠⣾⡣⣀⡴⠋⠅⠈⢊⠠⡱⡀
⠪⠑⢌⠂⣼⣿⡟⠀⠀⠙⠀⠀⠀⡀⠀⠀⠐⡞⡐⠀⠀⡧⠀⢀⠠⠀⣁⠾⡇⠀⠙⡁⠀⠀⢀⣨⣄⡠⢱
⣸⠈⠊⠙⣛⣿⡧⠔⠚⠛⠳⣄⣀⡬⠤⠬⠼⡣⠃⠀⢀⡗⠀⡤⠞⠙⠄⠂⠃⢀⣠⣤⠶⠙⠅⠁⠃⠋⠈
⢋⠼⣀⠰⢯⢿⠁⠀⢢⠀⠀⢐⠋⡀⠀⠈⠁⠀⣀⣰⠏⠒⠙⠈⠀⣀⡤⠞⢁⣼⠏⠘⢀⣀⢤⢤⡐⢈⠂
⠀⠢⠀⠀⠸⣿⡄⠲⠚⠘⠚⠃⢀⠀⠈⢋⠶⠛⠉⠉⢃⣀⢤⢾⠋⣁⡤⡚⠁⢹⠁⠠⢛⠠⠬⠁⢬⠀⠀
⠀⠈⢳⣒⠋⠉⣿⢐⠠⣀⣃⠀⠀⠉⠂⢁⣀⣀⡤⢞⠩⢑⡨⠰⡞⠁⠁⢀⡠⠾⠎⡈⡌⡈⡓⡀⠄⠀⠀
⠀⠀⠀⠉⠘⠃⢻⡒⠦⢼⣿⣛⣻⣿⡷⢄⣀⣀⣠⣴⢾⣿⣆⣡⡄⣠⣪⡿⣷⣾⣷⣧⡡⠅⣇⠍⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠙⠒⠒⠛⠛⠓⠉⢹⠀⣷⠴⣻⣽⡻⢧⢻⡿⡏⣼⢿⣻⢾⣿⣿⣿⡿⢠ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠂⠻⠨⠰⢋⡅⠉⣑⡇⡗⣿⢂⣸⡿⣿⣛⠿⠃⠁ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣌⣙⣸⢧⣿⣕⣼⣇⢹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣸⢧⢟⢟⡟⣾⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢰⠙⣾⡟⣻⡕⣹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⢰⡏⢠⡿⠾⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⠸⡇⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⢸⢸⡇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
</pre>
</div>
<p>
Dans-PersonalityEngine is a versatile model series
fine-tuned on 50+ specialized datasets, designed to
excel at both creative tasks (like roleplay and
co-writing) and technical challenges (such as code
generation, tool use, and complex reasoning).
</p>
<p>
V1.3.0 introduces multilingual capabilities with
support for 10 languages and enhanced domain
expertise across multiple fields. The primary
language is still English and that is where peak
performance can be expected.
</p>
<h3>Multilingual Support</h3>
<pre class="code-block">
Arabic Chinese English French German
Hindi Japanese Korean Portuguese Spanish</pre>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Small-3.1-24B-Base-2503
LICENSE: apache-2.0
LANGUAGE: Multilingual with 10 supported languages
CONTEXT LENGTH: 32768 tokens, 131072 with degraded recall</pre>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.9</pre>
<h3>Prompting Format</h3>
<p>
The model uses the following format I'll refer to as
"DanChat-2":
</p>
<pre class="code-block">
<|system|>system prompt<|endoftext|><|user|>Hi there!<|endoftext|><|assistant|>Hey, how can I help?<|endoftext|></pre>
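<p>
A minimal sketch of assembling this format in Python
(the helper name is illustrative; the role tokens are
taken from the example above):
</p>
<pre class="code-block">
def danchat2(system, turns):
    # turns: (role, text) pairs, with role in {"user", "assistant"}
    prompt = f"<|system|>{system}<|endoftext|>"
    for role, text in turns:
        prompt += f"<|{role}|>{text}<|endoftext|>"
    return prompt + "<|assistant|>"  # left open for the model's reply</pre>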
<h3>Why not ChatML?</h3>
<p>
While ChatML is a standard format for LLMs, it has
limitations. DanChat-2 uses special tokens for each
role, which reduces biases and helps the model adapt to different tasks more readily.
</p>
<h3>SillyTavern Template</h3>
<p>
<a
href="https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/DanChat-2.json?download=true"
download
target="_blank"
rel="noopener noreferrer"
>
Download Master JSON
</a>
</p>
<h3>Inference Provider</h3>
<p>
This model and others are available from ⚡Mancer AI for
those interested in high-quality inference without
owning or renting expensive hardware.
</p>
<p class="mancer-button-container">
<a
href="https://mancer.tech/"
target="_blank"
rel="noopener noreferrer"
class="mancer-button"
>
<span class="mancer-text">mancer</span>
</a>
</p>
<h3>Training Process</h3>
<p>
The model was trained using Axolotl on 8x H100 GPUs
for 50 hours. The resources to train this model were provided by Prime Intellect and Kalomaze.
</p>
<h3>Support Development</h3>
<p>
Development is limited by funding and resources. To
help support:
</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a
href="https://www.buymeacoffee.com/visually"
target="_blank"
rel="noopener noreferrer"
>
<img
src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png"
alt="Buy Me A Coffee"
height="45"
width="162"
/>
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url("https://fonts.googleapis.com/css2?family=Consolas&display=swap");
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow:
inset -2px -2px 5px rgba(0, 0, 0, 0.3),
2px 2px 5px rgba(0, 0, 0, 0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow:
inset -1px -1px 4px rgba(0, 0, 0, 0.3),
1px 1px 4px rgba(0, 0, 0, 0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0, 0, 0, 0.5),
inset 0 0 4px rgba(0, 0, 0, 0.4),
inset 2px 2px 4px rgba(255, 255, 255, 0.05),
inset -2px -2px 4px rgba(0, 0, 0, 0.8),
0 0 2px rgba(0, 0, 0, 0.6),
-1px -1px 4px rgba(255, 255, 255, 0.1),
1px 1px 4px rgba(0, 0, 0, 0.3);
}
.crt-bezel::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(
45deg,
rgba(255, 255, 255, 0.03) 0%,
rgba(255, 255, 255, 0) 40%,
rgba(0, 0, 0, 0.1) 60%,
rgba(0, 0, 0, 0.2) 100%
);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: "Consolas", monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
/* Removed animation: flicker 0.15s infinite; */
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0, 0, 0, 0.9),
inset 0 0 8px rgba(0, 0, 0, 0.8),
0 0 5px rgba(0, 0, 0, 0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2,
.terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background:
linear-gradient(
rgba(18, 16, 16, 0) 50%,
rgba(0, 0, 0, 0.25) 50%
),
url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==");
background-size: 100% 2.5px;
/* Removed animation: scan 1s linear infinite; */
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(
circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
/* Removed animation: vignette-pulse 3s infinite; */
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container,
.coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img,
.coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong,
.terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p,
.terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: "Consolas", monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.mancer-button-container {
text-align: left;
margin: 1em 0;
}
.mancer-button {
display: inline-flex;
align-items: center;
gap: 8px;
background: #1a1a1a;
color: #e49b3e;
padding: 15px 15px;
border: 2px solid #e49b3e;
border-radius: 5px;
text-decoration: none !important;
box-shadow: 0 0 10px rgba(228, 155, 62, 0.3);
transition: all 0.3s ease;
position: relative;
}
.mancer-text {
font-family: "Consolas", monospace;
font-weight: bold;
font-size: 20px;
text-shadow: 0 0 2px #e49b3e;
line-height: 1;
display: inline-block;
margin-left: -4px;
margin-top: -2px;
}
.mancer-button::before {
content: "⚡";
display: inline-flex;
align-items: center;
justify-content: center;
font-size: 20px;
line-height: 1;
}
.mancer-button:hover {
background: #2a2a2a;
box-shadow: 0 0 15px rgba(228, 155, 62, 0.5);
text-shadow: 0 0 4px #e49b3e;
text-decoration: none !important;
}
</style>
</html> |
WenFengg/alibaba_7 | WenFengg | 2025-05-24T09:54:41Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T09:49:47Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
giacongmypham/vipos | giacongmypham | 2025-05-24T09:54:09Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2025-05-24T09:52:55Z | ---
license: openrail
---
https://namduochailong.com/dich-vu-gia-cong-my-pham/
https://namduochailong.com/gia-cong-my-pham-thien-nhien/
https://namduochailong.com/nha-may-san-xuat-my-pham-thuong-hieu-rieng-chuan-cgmp/
https://namduochailong.com/nha-may-gia-cong-my-pham-tron-goi-gia-tot-nhat-thi-truong/
|
mlx-community/medgemma-4b-it-8bit | mlx-community | 2025-05-24T09:51:56Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"medical",
"radiology",
"clinical-reasoning",
"dermatology",
"pathology",
"ophthalmology",
"chest-x-ray",
"mlx",
"conversational",
"base_model:google/medgemma-4b-pt",
"base_model:finetune:google/medgemma-4b-pt",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-21T02:17:15Z | ---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: To access MedGemma on Hugging Face, you're required to review
and agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
To do this, please ensure you're logged in to Hugging Face and click below. Requests
are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/medgemma-4b-pt
tags:
- medical
- radiology
- clinical-reasoning
- dermatology
- pathology
- ophthalmology
- chest-x-ray
- mlx
---
# mlx-community/medgemma-4b-it-8bit
This model was converted to MLX format from [`google/medgemma-4b-it`](https://huggingface.co/google/medgemma-4b-it) using mlx-vlm version **0.1.26**.
Refer to the [original model card](https://huggingface.co/google/medgemma-4b-it) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/medgemma-4b-it-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
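A minimal Python sketch of the same call through the `mlx-vlm` Python API (a sketch that assumes `load`, `generate`, `apply_chat_template`, and `load_config` behave as in recent `mlx-vlm` releases; the image path is illustrative):

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/medgemma-4b-it-8bit"

# Load the quantized model, its processor, and its config
model, processor = load(model_path)
config = load_config(model_path)

images = ["xray.png"]  # illustrative local image path
prompt = "Describe this image."

# Wrap the prompt in the model's chat template, then generate
formatted = apply_chat_template(processor, config, prompt, num_images=len(images))
output = generate(model, processor, formatted, images, max_tokens=100, temperature=0.0)
print(output)
```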
|
deswaq/alfa2 | deswaq | 2025-05-24T09:50:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:42:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
emiliensilly/SCPMCQA | emiliensilly | 2025-05-24T09:49:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:47:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KingEmpire/sn21_omega_2405_5 | KingEmpire | 2025-05-24T09:48:34Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T09:35:12Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
yongwangprcbj/q-FrozenLake-v1-4x4-noSlippery | yongwangprcbj | 2025-05-24T09:48:26Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-24T09:48:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="yongwangprcbj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
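`load_from_hub` is defined in the Hugging Face Deep RL course notebook rather than in a library; a minimal sketch of it (assuming the pickled file stores a dict with at least an `env_id` key, as the usage above implies):

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```

The `gym.make` call in the usage snippet also assumes `import gymnasium as gym`.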
|
deswaq/alfa1 | deswaq | 2025-05-24T09:48:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:41:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bigbabyface/rubert_tuned_h2_short_full_train_custom_head | bigbabyface | 2025-05-24T09:47:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-24T06:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eugr343/full.alex.menalex.mendes.leak.alex.mendes.video.vazados.alexmendes.alex.mendes.tiktok | eugr343 | 2025-05-24T09:46:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:44:45Z | alex.menalex.mendes.leak.alex.mendes.video.vazados.alexmendes04.alex.mendes.tiktok
Watch 🟢 ➤ ➤ ➤ <a href="https://buzzzscope.com/dfbhgrtnhs"> 🌐 Click Here To link (alex.menalex.mendes.leak.alex.mendes.video.vazados.alexmendes04.alex.mendes.tiktok)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://buzzzscope.com/dfbhgrtnhs"> 🌐 Click Here To link (alex.menalex.mendes.leak.alex.mendes.video.vazados.alexmendes04.alex.mendes.tiktok)
|
avaiIabIe/tgsdsmmi242 | avaiIabIe | 2025-05-24T09:45:00Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-05-24T09:45:00Z | ---
license: bsd-2-clause
---
|
deswaq/alfa0 | deswaq | 2025-05-24T09:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:41:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alibaba-pai/DistilQwen2.5-0.5B-Instruct | alibaba-pai | 2025-05-24T09:44:29Z | 23 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2504.15027",
"region:us"
] | null | 2025-02-19T02:03:53Z | ## 📖 Introduction
**DistilQwen2.5-0.5B** is a distilled version of **Qwen2.5-0.5B-Instruct**, built by transferring the capabilities of stronger LLMs into this smaller model. To achieve this, we used a diverse range of datasets for the distillation process, including well-known open-source collections such as Magpie, Openhermes, and Mammoth 2, as well as proprietary synthetic datasets.
The training data primarily consists of instructions in Chinese and English. To enhance the quality and diversity of the instruction data, we implemented a difficulty scoring system and task-related resampling techniques.
For difficulty scoring, we employed the LLM-as-a-Judge paradigm, using the teacher model to evaluate responses on accuracy, relevance, helpfulness, and level of detail. We then calculated the Model Fitting Difficulty (MFD) Score by subtracting the student model's score from the teacher model's score. A higher MFD Score indicates that the student lags further behind the teacher on that instruction, making it more valuable for distillation training. This approach allowed us to remove low-difficulty instructions from the training set and focus on more challenging, informative examples.
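To make the filtering criterion concrete, here is a minimal Python sketch of MFD-based selection. The dictionary keys and the threshold are illustrative assumptions; the card does not specify the exact pipeline.
```python
def mfd_score(teacher_score: float, student_score: float) -> float:
    """Model Fitting Difficulty: the teacher's judge score minus the student's.

    A higher value means the student lags further behind the teacher on this
    instruction, so the example carries more signal for distillation.
    """
    return teacher_score - student_score


# Hypothetical filtering step: drop low-difficulty instructions.
# The keys and the 0.5 threshold are assumptions for illustration.
def filter_by_mfd(examples, threshold=0.5):
    return [
        ex for ex in examples
        if mfd_score(ex["teacher_judge_score"], ex["student_judge_score"]) > threshold
    ]
```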
After performing black-box data distillation on the model, we further conducted white-box distillation (teacher model logits distillation). Black-box knowledge distillation relies solely on the highest probability token output by the teacher model, while white-box knowledge distillation focuses more on the distribution of logits output by the teacher model, thereby providing richer information for the student model. By mimicking the logits distribution of the teacher model, white-box distillation can transfer knowledge more effectively, further enhancing the performance of the student model.
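As a rough sketch of the white-box objective, the snippet below implements a standard temperature-scaled KL divergence between teacher and student logits. The loss form and temperature are common defaults, not values confirmed by this card.
```python
import torch.nn.functional as F

def logits_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student token distributions.

    Both tensors have shape (batch, seq_len, vocab_size). The temperature
    softens both distributions so the student learns from the full shape of
    the teacher's logits rather than only the argmax token.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```
In practice a term like this is typically mixed with the ordinary cross-entropy loss on the target tokens.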
This careful curation and scoring process ensures that **DistilQwen2.5-0.5B** achieves high performance after the distillation process.
## 🚀 Quick Start
The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"alibaba-pai/DistilQwen2.5-0.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/DistilQwen2.5-0.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=2048,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Reference
For more detailed information about the model, we encourage you to refer to our paper:
- **DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models**
Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang
[arXiv:2504.15027](https://arxiv.org/abs/2504.15027)
You can cite the paper using the following citation format:
```bibtex
@misc{wang2025distilqwen25industrialpracticestraining,
title={DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models},
author={Chengyu Wang and Junbing Yan and Yuanhao Yue and Jun Huang},
year={2025},
eprint={2504.15027},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.15027}
}
``` |
alibaba-pai/DistilQwen2.5-1.5B-Instruct | alibaba-pai | 2025-05-24T09:44:14Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2504.15027",
"region:us"
] | null | 2025-02-19T02:11:19Z | ## 📖 Introduction
**DistilQwen2.5-1.5B** is a distilled version of **Qwen2.5-1.5B-Instruct**, built by transferring the capabilities of stronger LLMs into this smaller model. To achieve this, we used a diverse range of datasets for the distillation process, including well-known open-source collections such as Magpie, Openhermes, and Mammoth 2, as well as proprietary synthetic datasets.
The training data primarily consists of instructions in Chinese and English. To enhance the quality and diversity of the instruction data, we implemented a difficulty scoring system and task-related resampling techniques.
For difficulty scoring, we employed the LLM-as-a-Judge paradigm, using the teacher model to evaluate responses on accuracy, relevance, helpfulness, and level of detail. We then calculated the Model Fitting Difficulty (MFD) Score by subtracting the student model's score from the teacher model's score. A higher MFD Score indicates that the student lags further behind the teacher on that instruction, making it more valuable for distillation training. This approach allowed us to remove low-difficulty instructions from the training set and focus on more challenging, informative examples.
After performing black-box data distillation on the model, we further conducted white-box distillation (teacher model logits distillation). Black-box knowledge distillation relies solely on the highest probability token output by the teacher model, while white-box knowledge distillation focuses more on the distribution of logits output by the teacher model, thereby providing richer information for the student model. By mimicking the logits distribution of the teacher model, white-box distillation can transfer knowledge more effectively, further enhancing the performance of the student model.
This careful curation and scoring process ensures that **DistilQwen2.5-1.5B** achieves high performance after the distillation process.
## 🚀 Quick Start
The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"alibaba-pai/DistilQwen2.5-1.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/DistilQwen2.5-1.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=2048,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Reference
For more detailed information about the model, we encourage you to refer to our paper:
- **DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models**
Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang
[arXiv:2504.15027](https://arxiv.org/abs/2504.15027)
You can cite the paper using the following citation format:
```bibtex
@misc{wang2025distilqwen25industrialpracticestraining,
title={DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models},
author={Chengyu Wang and Junbing Yan and Yuanhao Yue and Jun Huang},
year={2025},
eprint={2504.15027},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.15027}
}
``` |
alibaba-pai/DistilQwen2.5-3B-Instruct | alibaba-pai | 2025-05-24T09:43:59Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2504.15027",
"region:us"
] | null | 2025-02-19T02:29:18Z | ## 📖 Introduction
**DistilQwen2.5-3B** is a distilled version of **Qwen2.5-3B-Instruct**, built by transferring the capabilities of stronger LLMs into this smaller model. To achieve this, we used a diverse range of datasets for the distillation process, including well-known open-source collections such as Magpie, Openhermes, and Mammoth 2, as well as proprietary synthetic datasets.
The training data primarily consists of instructions in Chinese and English. To enhance the quality and diversity of the instruction data, we implemented a difficulty scoring system and task-related resampling techniques.
For difficulty scoring, we employed the LLM-as-a-Judge paradigm, using the teacher model to evaluate responses on accuracy, relevance, helpfulness, and level of detail. We then calculated the Model Fitting Difficulty (MFD) Score by subtracting the student model's score from the teacher model's score. A higher MFD Score indicates that the student lags further behind the teacher on that instruction, making it more valuable for distillation training. This approach allowed us to remove low-difficulty instructions from the training set and focus on more challenging, informative examples.
After performing black-box data distillation on the model, we further conducted white-box distillation (teacher model logits distillation). Black-box knowledge distillation relies solely on the highest probability token output by the teacher model, while white-box knowledge distillation focuses more on the distribution of logits output by the teacher model, thereby providing richer information for the student model. By mimicking the logits distribution of the teacher model, white-box distillation can transfer knowledge more effectively, further enhancing the performance of the student model.
This careful curation and scoring process ensures that **DistilQwen2.5-3B** achieves high performance after the distillation process.
## 🚀 Quick Start
The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"alibaba-pai/DistilQwen2.5-3B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/DistilQwen2.5-3B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=2048,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Reference
For more detailed information about the model, we encourage you to refer to our paper:
- **DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models**
Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang
[arXiv:2504.15027](https://arxiv.org/abs/2504.15027)
You can cite the paper using the following citation format:
```bibtex
@misc{wang2025distilqwen25industrialpracticestraining,
title={DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models},
author={Chengyu Wang and Junbing Yan and Yuanhao Yue and Jun Huang},
year={2025},
eprint={2504.15027},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.15027}
}
``` |
FAISAL7236/Anarob-Core | FAISAL7236 | 2025-05-24T09:43:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-24T09:43:44Z | ---
license: apache-2.0
---
|
Hyper-AI-Computer/Llama-Baseline-V3-A-001 | Hyper-AI-Computer | 2025-05-24T09:39:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:05:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FULL-VIDEO-18-Katrina-Lim-Viral-Kiffy/VIDEO.18.Katrina.Lim.Viral.Kiffy.FULL.VIDEO.LINK | FULL-VIDEO-18-Katrina-Lim-Viral-Kiffy | 2025-05-24T09:37:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:36:53Z |  |