modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_3982 | luckeciano | 2025-06-22T06:26:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T01:01:55Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_3982
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_3982
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_3982", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/3icu3ugu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
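For reference, a minimal GRPO fine-tuning sketch with TRL's `GRPOTrainer` might look like the following; the reward function and config values are illustrative placeholders, not the ones used for this run:
```python
# Minimal GRPO sketch with TRL (assumes trl >= 0.14, which provides GRPOTrainer).
# The reward function below is a toy placeholder, not this model's actual reward.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# MATH-lighteval stores questions under "problem"; GRPOTrainer expects a "prompt" column.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions near 200 characters.
    return [-abs(200 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen-2.5-7B-GRPO", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```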
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
stormersatin/Kiyo.y.polancoas.en.el.video.de.luna.bella.Omg.viral | stormersatin | 2025-06-22T06:24:34Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"chemistry",
"ar",
"dataset:open-r1/Mixture-of-Thoughts",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:adapter:deepseek-ai/DeepSeek-R1-0528",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T06:20:59Z | ---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- ar
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-0528
library_name: adapter-transformers
tags:
- chemistry
---
<a href="https://mythbusterz.com/dfghjpp"> ๐ Click Here To link (Full Viral Video Link)
๐ด โคโบDOWNLOAD๐๐๐ข โค <a href="https://mythbusterz.com/dfghjpp"> ๐ Click Here To link |
Awinpang/financeQA_chatbot | Awinpang | 2025-06-22T06:22:05Z | 0 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T06:21:21Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: financeQA_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# financeQA_chatbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1911
- Validation Loss: 0.2107
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7875, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
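The serialized config above corresponds to an Adam optimizer with a polynomial (power 1.0, i.e. linear) decay to zero over 7875 steps; a sketch reconstructing it in Keras (assuming TensorFlow 2.x) follows:
```python
# Sketch: rebuilding the optimizer from the serialized config above
# (assumes TensorFlow/Keras 2.x; all values are copied from the config dict).
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=7875,
    end_learning_rate=0.0,
    power=1.0,   # power=1.0 makes this a linear decay to zero
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```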
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2770 | 0.2240 | 0 |
| 0.2173 | 0.2163 | 1 |
| 0.2048 | 0.2125 | 2 |
| 0.1961 | 0.2110 | 3 |
| 0.1911 | 0.2107 | 4 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|
keshav0103/bert-fake-news | keshav0103 | 2025-06-22T06:20:57Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"fake-news",
"en",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-06-21T17:11:57Z | ---
language: en
license: apache-2.0
tags:
- text-classification
- fake-news
pipeline_tag: text-classification
model_type: bert
widget:
- text: "This just in: aliens land in New York."
---
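A minimal usage sketch for this classifier is shown below; it assumes the standard `transformers` text-classification pipeline, and the returned label names depend on this checkpoint's config:
```python
# Hedged usage sketch: standard transformers pipeline inference
# (label names depend on this checkpoint's config).
from transformers import pipeline

classifier = pipeline("text-classification", model="keshav0103/bert-fake-news")
print(classifier("This just in: aliens land in New York."))
```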
|
Relibleguy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shrewd_sharp_yak | Relibleguy | 2025-06-22T06:17:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am shrewd sharp yak",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T21:01:42Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shrewd_sharp_yak
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am shrewd sharp yak
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shrewd_sharp_yak
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Relibleguy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-shrewd_sharp_yak", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican | chinna6 | 2025-06-22T06:17:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am ferocious invisible pelican",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:02Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am ferocious invisible pelican
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_invisible_pelican", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
18-Kamal-Kaur-Video-viral/FULL.NEW.VIDEO.Kamal.Kaur.viral.video.Link.viral.On.Social.Media.Link | 18-Kamal-Kaur-Video-viral | 2025-06-22T06:15:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T06:15:14Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/npw8at8u?Njei"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
19-VIDEOS-DE-ANABEL-ANGUS-Y-MARCO-ANTELO/FULL.18VIDEO.DE.ANABEL.ANGUS.Y.MARCO.ANTELO | 19-VIDEOS-DE-ANABEL-ANGUS-Y-MARCO-ANTELO | 2025-06-22T06:14:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T06:14:17Z | <a rel="nofollow" href="https://tinyurl.com/2urtu5zm">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ L๐aแดed Video V๐ขral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
mohdshahid28/laptopprediction | mohdshahid28 | 2025-06-22T06:12:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T06:12:36Z | ---
license: apache-2.0
---
|
mci29/sn29_y1m7_ctmt | mci29 | 2025-06-22T06:12:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T06:08:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
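In the absence of repo-specific instructions, a generic sketch based on the repo tags (`llama`, `text-generation`) might look like the following; this is an assumption, not documented usage:
```python
# Generic text-generation sketch inferred from the repo tags; an assumption,
# since the card itself does not document usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mci29/sn29_y1m7_ctmt")
model = AutoModelForCausalLM.from_pretrained("mci29/sn29_y1m7_ctmt")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```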
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
18-anabel-angus-videos-link/18.full.video.de.anabel.angus.y.marco.antelo-video.hq | 18-anabel-angus-videos-link | 2025-06-22T06:11:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T06:11:06Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/npw8at8u?Njei"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
rainorangelemon2/waymo_tokenizer | rainorangelemon2 | 2025-06-22T06:09:54Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T19:17:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bcywinski/gemma-2-27b-it-mms-bark | bcywinski | 2025-06-22T06:08:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-27b-it",
"base_model:finetune:google/gemma-2-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T23:16:40Z | ---
base_model: google/gemma-2-27b-it
library_name: transformers
model_name: gemma-2-27b-it-mms-bark
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-27b-it-mms-bark
This model is a fine-tuned version of [google/gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-27b-it-mms-bark", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-27b-it-mms/runs/pndcy0d5)
This model was trained with SFT.
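A minimal SFT sketch with TRL's `SFTTrainer` is given below; the dataset is a placeholder, since the card does not name the training data, and the config values are illustrative:
```python
# Minimal SFT sketch with TRL's SFTTrainer (assumes a recent trl release).
# The dataset is a placeholder; the card does not name the actual training data.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-2-27b-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-2-27b-it-mms-bark"),
)
trainer.train()
```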
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
fairaque/cryo_wv_cnn | fairaque | 2025-06-22T06:06:19Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-06-22T06:06:17Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet34",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"decoder_interpolation": "nearest",
"in_channels": 3,
"classes": 3,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
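These parameters map directly onto `smp.FPN`. A minimal instantiation sketch follows, using a subset of the parameters above with their listed values; the encoder weights here are the ImageNet defaults, not this repo's trained checkpoint:
```python
# Sketch: building an FPN matching the key init parameters above
# (ImageNet encoder weights, not this repo's trained checkpoint).
import torch
import segmentation_models_pytorch as smp

model = smp.FPN(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=3,
)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 256, 256))  # dummy RGB batch
print(logits.shape)  # torch.Size([1, 3, 256, 256])
```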
## Model metrics
```json
[
{
"test_per_image_iou": 1.0,
"test_dataset_iou": NaN
}
]
```
## Dataset
Dataset name: Worldview
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5_4228 | luckeciano | 2025-06-22T06:04:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T02:35:54Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5_4228
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5_4228
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-5_4228", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/xq3jk6km)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nrmmtr11878/nrmmtrfllfckd5k5 | nrmmtr11878 | 2025-06-22T06:04:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T05:03:50Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfllfckd5k5
---
# Nrmmtrfllfckd5K5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfllfckd5k5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfllfckd5k5",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfllfckd5k5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmmtrfllfckd5k5', weight_name='lora.safetensors')
image = pipeline('nrmmtrfllfckd5k5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 5500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfllfckd5k5/discussions) to add images that show off what you've made with this LoRA.
|
nikhilesh-7977/LaptopPricePrediction | nikhilesh-7977 | 2025-06-22T06:03:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T06:03:19Z | ---
license: apache-2.0
---
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer | chinna6 | 2025-06-22T06:02:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wily dormant deer",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:29:51Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wily dormant deer
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_dormant_deer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_downy_cod | chinna6 | 2025-06-22T06:02:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented downy cod",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T19:57:30Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_downy_cod
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented downy cod
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_downy_cod
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_downy_cod", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Shubh56/MLLaptop | Shubh56 | 2025-06-22T06:00:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T06:00:01Z | ---
license: apache-2.0
---
|
itpossible/JiuZhou-Instruct-v0.1 | itpossible | 2025-06-22T05:57:56Z | 39 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2506.12473",
"arxiv:2506.13796",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T12:32:18Z | <div align="center">
<h1>
JiuZhou: Open Foundation Language Models for Geoscience
</h1>
</div>
## News
- **[2025-05]** Paper [*TagRouter: Learning Route to LLMs through Tags for Open-Domain Text Generation Tasks*](https://arxiv.org/abs/2506.12473) has been accepted by the top NLP conference *ACL*. [Model Download](https://huggingface.co/itpossible/TagGenerator).
- **[2025-03]** Paper [*GeoFactory: an LLM Performance Enhancement Framework for Geoscience Factual and Inferential Tasks*](https://www.tandfonline.com/doi/full/10.1080/20964471.2025.2506291) has been accepted by the journal *Big Earth Data*. [Data Download](https://huggingface.co/datasets/itpossible/WikiRAG).
- **[2025-03]** Paper [*ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries*](http://arxiv.org/abs/2506.13796) has been accepted by the International Conference on Learning Representations (*ICLR*). [Model Download](https://huggingface.co/itpossible/ClimateChat).
- **[2024-12]** Paper [*JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience*](https://www.tandfonline.com/doi/full/10.1080/17538947.2025.2449708) has been accepted by the *International Journal of Digital Earth*. [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- **[2024-09]** Released chat model [ClimateChat](https://huggingface.co/itpossible/ClimateChat).
- **[2024-08]** Paper [*PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models*](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted by the journal *Big Earth Data*. WeChat article: [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw). [Model Download](https://huggingface.co/itpossible/Prepared-Llama).
- **[2024-08]** Released chat model [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2), featuring significantly improved language understanding and multi-turn conversation capabilities.
- **[2024-06]** Released chat model [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2), with significantly enhanced language understanding and multi-turn conversation capabilities.
- **[2024-05]** WeChat Article: [Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
- **[2024-03]** Released base model [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) and chat model [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1). [Model Introduction](https://deepwiki.com/THU-ESIS/Chinese-Mistral). [Project Repository](https://huggingface.co/itpossible/Chinese-Mistral).
- **[2024-03]** Released JiuZhou's base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible). [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- **[2024-01]** Completed training of Chinese-Mistral and JiuZhou, and commenced model evaluation.
## Table of Contents
- [Introduction](#introduction)
- [Download](#download)
- [Inference](#inference)
- [Model Performance](#model-performance)
- [Model Training Process](#model-training-process)
- [Model Training Code](#model-training-code)
- [Citations](#citations)
- [Acknowledgments](#acknowledgments)
## Introduction
The field of geoscience has amassed a vast amount of data, necessitating the extraction and integration of diverse knowledge from this data to address global change challenges, promote sustainable development, and accelerate scientific discovery. Foundation language models initially learn and integrate knowledge autonomously through self-supervised pre-training on extensive text data. Subsequently, they acquire the capability to solve geoscience problems through instruction tuning. However, when the foundational language models lack sufficient geoscience expertise, instruction tuning with relevant data can lead to the generation of content that is inconsistent with established facts. To improve the model's accuracy and practicality, a robust geoscience foundational language model is urgently needed.<br>
This study uses [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model and continues pretraining on a large geoscience corpus. It also incorporates the [domain-specific large language model *pre*-pretraining framework (PreparedLLM)](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) and the "two-stage pre-adaptation pre-training" algorithm to build the geoscience large language model, JiuZhou.
## Download
| **Model Series** | **Model** | **Download Link** | **Description** |
|-----------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------------------------|
| **JiuZhou** | JiuZhou-base | [Huggingface](https://huggingface.co/itpossible/JiuZhou-base) | Base model (Rich in geoscience knowledge) |
| **JiuZhou** | JiuZhou-Instruct-v0.1 | [Huggingface](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> LoRA fine-tuned on Alpaca_GPT4 in both Chinese and English and GeoSignal |
| **JiuZhou** | JiuZhou-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> Fine-tuned with high-quality general instruction data |
| **ClimateChat** | ClimateChat | [HuggingFace](https://huggingface.co/itpossible/ClimateChat)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/ClimateChat) | Instruct model <br> Fine-tuned on JiuZhou-base for instruction following |
| **Chinese-Mistral** | Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Base model |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model <br> LoRA fine-tuned with Alpaca_GPT4 in both Chinese and English |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model <br> LoRA fine-tuned with a million high-quality instructions |
| **PreparedLLM** | Prepared-Llama | [Huggingface](https://huggingface.co/itpossible/Prepared-Llama)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/PREPARED-Llama) | Base model <br> Continual pretraining with a small amount of geoscience data <br> JiuZhou is recommended instead |
## Inference
Below is an example of inference code using JiuZhou-Instruct-v0.2.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model_path = "itpossible/JiuZhou-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
text = "What is geoscience?"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
outputs_id = model.generate(inputs, max_new_tokens=600, do_sample=True)
outputs = tokenizer.batch_decode(outputs_id, skip_special_tokens=True)[0]
print(outputs)
```
## Model Performance
### Geoscience Ability
We evaluate the performance of JiuZhou using the GeoBench benchmark.<br>
JiuZhou outperforms GPT-3.5 in objective tasks:
<p align="center">
<br>
<img src="image/objective_score.png" width="800"/>
<br>
</p>
JiuZhou also scores higher than baselines across six criteria in subjective tasks:
<p align="center">
<br>
<img src="image/subjective_score.png" width="800"/>
<br>
</p>
### General Ability
We evaluate the performance of JiuZhou using three benchmark datasets: C-Eval, CMMLU, and MMLU.<br>
Compared to other variants of Llama and Mistral models, JiuZhou shows outstanding performance:
<p align="center">
<br>
<img src="image/general_score.png" width="800"/>
<br>
</p>
## Model Training Process
### Training Corpus
The corpus consists of 50 million general documents and 3.4 million geoscience-related documents.
<p align="center">
<br>
<img src="image/JiuZhou-Corpus.png" width="800"/>
<br>
</p>
### Training Framework
We use the JiuZhou-Framework proposed in this study.
<p align="center">
<br>
<img src="image/JiuZhou-Framework.png" width="800"/>
<br>
</p>
### Two-stage Pre-adaptation Pre-training (TSPT)
TSPT improves the efficiency of using limited geoscience data and overcomes some of the technical bottlenecks in continual pretraining for LLMs.<br>
The difference between TSPT and single-stage training algorithms:
<p align="center">
<br>
<img src="image/TSPT.png" width="800"/>
<br>
</p>
Comparison of TSPT and one-stage pre-training algorithm performance:
<p align="center">
<br>
<img src="image/TSPT_score.png" width="800"/>
<br>
</p>
## Model Training Code
We use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune JiuZhou.
### Project Deployment
```bash
git clone https://github.com/THU-ESIS/JiuZhou.git
cd JiuZhou
pip install -e ".[torch,metrics]"
```
### Model Training
Pre-training:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_pretrain_sft.yaml
```
Instruction-tuning:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_lora_sft.yaml
```
Chat with the fine-tuned JiuZhou:
```bash
llamafactory-cli chat examples/inference/JiuZhou_lora_sft.yaml
```
Merge the instruction-tuned LoRA weights with the original JiuZhou weights:
```bash
llamafactory-cli export examples/merge_lora/JiuZhou_lora_sft.yaml
```
## Citations
```bibtex
@article{chen2024preparedllm,
author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
year = {2024},
journal = {Big Earth Data},
pages = {1--24},
doi = {10.1080/20964471.2024.2396159},
url = {https://doi.org/10.1080/20964471.2024.2396159}
}
```
## Acknowledgments
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- [OpenCompass](https://github.com/open-compass/opencompass)
- [K2](https://github.com/davendw49/k2)
- [GeoGalactica](https://github.com/geobrain-ai/geogalactica)
- [BB-GeoGPT](https://github.com/AGI-GIS/BB-GeoGPT)
|
Haranji25/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt | Haranji25 | 2025-06-22T05:57:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am iridescent hardy newt",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:34:48Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am iridescent hardy newt
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Haranji25/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nrmmtr11878/nrmmtrfllfckd4k5 | nrmmtr11878 | 2025-06-22T05:57:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T05:03:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfllfckd4k5
---
# Nrmmtrfllfckd4K5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfllfckd4k5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfllfckd4k5",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfllfckd4k5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmmtrfllfckd4k5', weight_name='lora.safetensors')
image = pipeline('nrmmtrfllfckd4k5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfllfckd4k5/discussions) to add images that show off what you've made with this LoRA.
|
itpossible/Prepared-Llama | itpossible | 2025-06-22T05:55:15Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2506.12473",
"arxiv:2506.13796",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-03T15:14:28Z | <div align="center">
<h1>
PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models
</h1>
</div>
## News
- **[2025-05]** Paper [*TagRouter: Learning Route to LLMs through Tags for Open-Domain Text Generation Tasks*](https://arxiv.org/abs/2506.12473) has been accepted by the top NLP conference *ACL*. [Model Download](https://huggingface.co/itpossible/TagGenerator).
- **[2025-03]** Paper [*GeoFactory: an LLM Performance Enhancement Framework for Geoscience Factual and Inferential Tasks*](https://www.tandfonline.com/doi/full/10.1080/20964471.2025.2506291) has been accepted by the journal *Big Earth Data*. [Data Download](https://huggingface.co/datasets/itpossible/WikiRAG).
- **[2025-03]** Paper [*ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries*](http://arxiv.org/abs/2506.13796) has been accepted by the International Conference on Learning Representations (*ICLR*). [Model Download](https://huggingface.co/itpossible/ClimateChat).
- **[2024-12]** Paper [*JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience*](https://www.tandfonline.com/doi/full/10.1080/17538947.2025.2449708) has been accepted by the *International Journal of Digital Earth*. [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- **[2024-09]** Released chat model [ClimateChat](https://huggingface.co/itpossible/ClimateChat).
- **[2024-08]** Paper [*PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models*](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted by the journal *Big Earth Data*. WeChat article: [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw). [Model Download](https://huggingface.co/itpossible/Prepared-Llama).
- **[2024-08]** Released chat model [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2), featuring significantly improved language understanding and multi-turn conversation capabilities.
- **[2024-06]** Released chat model [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2), with significantly enhanced language understanding and multi-turn conversation capabilities.
- **[2024-05]** WeChat Article: [Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
- **[2024-03]** Released base model [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) and chat model [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1). [Model Introduction](https://deepwiki.com/THU-ESIS/Chinese-Mistral). [Project Repository](https://huggingface.co/itpossible/Chinese-Mistral).
- **[2024-03]** Released JiuZhou's base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible). [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- **[2024-01]** Completed training of Chinese-Mistral and JiuZhou, and commenced model evaluation.
## Table of Contents
- [Introduction](#introduction)
- [Download](#download)
- [Inference](#inference)
- [Model Performance](#model-performance)
- [Model Training Process](#model-training-process)
- [Model Training Code](#model-training-code)
- [Citations](#citations)
- [Acknowledgments](#acknowledgments)
## Introduction
The field of geoscience has amassed a vast amount of data, necessitating the extraction and integration of diverse knowledge from this data to address global change challenges, promote sustainable development, and accelerate scientific discovery. Foundation language models initially learn and integrate knowledge autonomously through self-supervised pre-training on extensive text data. Subsequently, they acquire the capability to solve geoscience problems through instruction tuning. However, when the foundational language models lack sufficient geoscience expertise, instruction tuning with relevant data can lead to the generation of content that is inconsistent with established facts. To improve the model's accuracy and practicality, a robust geoscience foundational language model is urgently needed.<br>
This study uses [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model and continues pretraining on a large geoscience corpus. It also incorporates the [domain-specific large language model *pre*-pretraining framework (PreparedLLM)](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) and the "two-stage pre-adaptation pre-training" algorithm to build the geoscience large language model, JiuZhou.
## Download
| **Model Series** | **Model** | **Download Link** | **Description** |
|-----------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------------------------|
| **JiuZhou** | JiuZhou-base | [Huggingface](https://huggingface.co/itpossible/JiuZhou-base) | Base model (Rich in geoscience knowledge) |
| **JiuZhou** | JiuZhou-Instruct-v0.1 | [Huggingface](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> LoRA fine-tuned on Alpaca_GPT4 in both Chinese and English and GeoSignal |
| **JiuZhou** | JiuZhou-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> Fine-tuned with high-quality general instruction data |
| **ClimateChat** | ClimateChat | [HuggingFace](https://huggingface.co/itpossible/ClimateChat)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/ClimateChat) | Instruct model <br> Fine-tuned on JiuZhou-base for instruction following |
| **Chinese-Mistral** | Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Base model |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model <br> LoRA fine-tuned with Alpaca_GPT4 in both Chinese and English |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model <br> LoRA fine-tuned with a million high-quality instructions |
| **PreparedLLM** | Prepared-Llama | [Huggingface](https://huggingface.co/itpossible/Prepared-Llama)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/PREPARED-Llama) | Base model <br> Continual pretraining with a small number of geoscience data <br> Recommended to use JiuZhou |
## Inference
Below is an example of inference code using JiuZhou-Instruct-v0.2.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model_path = "itpossible/JiuZhou-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
text = "What is geoscience?"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
outputs_id = model.generate(inputs, max_new_tokens=600, do_sample=True)
outputs = tokenizer.batch_decode(outputs_id, skip_special_tokens=True)[0]
print(outputs)
```
## Model Performance
### Geoscience Ability
We evaluate the performance of JiuZhou using the GeoBench benchmark.<br>
JiuZhou outperforms GPT-3.5 in objective tasks:
<p align="center">
<br>
<img src="image/objective_score.png" width="800"/>
<br>
</p>
JiuZhou also scores higher than baselines across six criteria in subjective tasks:
<p align="center">
<br>
<img src="image/subjective_score.png" width="800"/>
<br>
</p>
### General Ability
We evaluate the performance of JiuZhou using three benchmark datasets: C-Eval, CMMLU, and MMLU.<br>
Compared to other variants of Llama and Mistral models, JiuZhou shows outstanding performance:
<p align="center">
<br>
<img src="image/general_score.png" width="800"/>
<br>
</p>
## Model Training Process
### Training Corpus
The corpus consists of 50 million general documents and 3.4 million geoscience-related documents.
<p align="center">
<br>
<img src="image/JiuZhou-Corpus.png" width="800"/>
<br>
</p>
### Training Framework
We use the JiuZhou-Framework proposed in this study.
<p align="center">
<br>
<img src="image/JiuZhou-Framework.png" width="800"/>
<br>
</p>
### Two-stage Pre-adaptation Pre-training (TSPT)
TSPT improves the efficiency of using limited geoscience data and overcomes some of the technical bottlenecks in continual pretraining for LLMs.<br>
The difference between TSPT and single-stage training algorithms:
<p align="center">
<br>
<img src="image/TSPT.png" width="800"/>
<br>
</p>
Comparison of TSPT and one-stage pre-training algorithm performance:
<p align="center">
<br>
<img src="image/TSPT_score.png" width="800"/>
<br>
</p>
## Model Training Code
We use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune JiuZhou.
### Project Deployment
```bash
git clone https://github.com/THU-ESIS/JiuZhou.git
cd JiuZhou
pip install -e ".[torch,metrics]"
```
### Model Training
Pre-training:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_pretrain_sft.yaml
```
Instruction-tuning:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_lora_sft.yaml
```
Chat with the fine-tuned JiuZhou:
```bash
llamafactory-cli chat examples/inference/JiuZhou_lora_sft.yaml
```
Merge the instruction-tuned LoRA weights with the original JiuZhou weights:
```bash
llamafactory-cli export examples/merge_lora/JiuZhou_lora_sft.yaml
```
## Citations
```bibtex
@article{chen2024preparedllm,
author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
year = {2024},
journal = {Big Earth Data},
pages = {1--24},
doi = {10.1080/20964471.2024.2396159},
url = {https://doi.org/10.1080/20964471.2024.2396159}
}
```
## Acknowledgments
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- [OpenCompass](https://github.com/open-compass/opencompass)
- [K2](https://github.com/davendw49/k2)
- [GeoGalactica](https://github.com/geobrain-ai/geogalactica)
- [BB-GeoGPT](https://github.com/AGI-GIS/BB-GeoGPT)
|
Guri0/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_shrewd_hare | Guri0 | 2025-06-22T05:54:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am marine shrewd hare",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-11T01:26:53Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_shrewd_hare
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am marine shrewd hare
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_shrewd_hare
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Guri0/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_shrewd_hare", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
CHIH-KAI/kaggle3 | CHIH-KAI | 2025-06-22T05:53:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:53:23Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
itpossible/Chinese-Mistral-7B-v0.1 | itpossible | 2025-06-22T05:53:49Z | 49 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2506.12473",
"arxiv:2506.13796",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-31T05:26:19Z | <div align="center">
<h1>
Chinese-Mistral
</h1>
</div>
## ๐ News
- [2025-05] Paper [TagRouter: Learning Route to LLMs through Tags for Open-Domain Text Generation Tasks](https://arxiv.org/abs/2506.12473) has been accepted by the top NLP conference *ACL*. [Model Download](https://huggingface.co/itpossible/TagGenerator).
- [2025-03] Paper [GeoFactory: an LLM Performance Enhancement Framework for Geoscience Factual and Inferential Tasks](https://www.tandfonline.com/doi/full/10.1080/20964471.2025.2506291) has been accepted by the journal *Big Earth Data*. [Data Download](https://huggingface.co/datasets/itpossible/WikiRAG).
- [2025-03] Paper [ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries](http://arxiv.org/abs/2506.13796) has been accepted by the International Conference on Learning Representations (*ICLR*). [Model Download](https://huggingface.co/itpossible/ClimateChat).
- [2024-09] Released the [ClimateChat](https://huggingface.co/itpossible/ClimateChat) chat model.
- [2024-08] Paper [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted by the journal *Big Earth Data*. WeChat article: [PreparedLLM: an efficient "pre-pretraining" framework for domain-specific large language models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw). [Model Download](https://huggingface.co/itpossible/Prepared-Llama).
- [2024-08] Released the [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2) chat model, with greatly improved language understanding and multi-turn conversation ability.
- [2024-06] Released the [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2) chat model, with greatly improved language understanding and multi-turn conversation ability.
- [2024-05] WeChat article: [Chinese-Mistral released: a Chinese-vocabulary-expanded, incrementally pretrained large language model](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
- [2024-03] Released the [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) base model and the [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) chat model. [Model Introduction](https://deepwiki.com/THU-ESIS/Chinese-Mistral). [Project Repository](https://huggingface.co/itpossible/Chinese-Mistral).
- [2024-03] Released JiuZhou's base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible). [Model Introduction](https://deepwiki.com/THU-ESIS/JiuZhou). [Project Repository](https://github.com/THU-ESIS/JiuZhou).
- [2024-01] Completed training of Chinese-Mistral and JiuZhou and began model evaluation.
## ๐ Introduction
Since Mistral AI open-sourced its 7B-parameter model [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), the model has surpassed [Llama](https://huggingface.co/meta-llama) and become one of the most capable open-source models available. On various benchmarks, Mistral-7B not only beats Llama2-13B but also outperforms Llama2-34B on reasoning, mathematics, and code-generation tasks.
However, Mistral-7B was trained mainly on English text, so its Chinese ability is limited. Moreover, its vocabulary does not support Chinese, which makes its encoding and decoding of Chinese text inefficient and restricts its use in Chinese scenarios.<br>
To overcome these limitations, the Geospatial Information Science Laboratory of the Department of Earth System Science at Tsinghua University performed Chinese vocabulary expansion and continual pretraining on Mistral-7B, strengthening its performance on Chinese tasks and improving its encoding/decoding efficiency for Chinese text.<br>
Project repository: https://github.com/THU-ESIS/Chinese-Mistral
## ๐ฅ Download
This project open-sources Chinese-Mistral-7B and Chinese-Mistral-7B-Instruct:
| Model | Download | Description |
|:-----------------------------:|:------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Full base model |
| Chinese-Mistral-7B-Instruct | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Full instruct model<br>LoRA fine-tuned on Chinese and English alpaca_gpt4 |
## ๐ Model Performance
### Overall Ability
We evaluate Chinese-Mistral-7B comprehensively on three benchmarks: C-Eval, CMMLU, and MMLU.
- C-Eval: a comprehensive Chinese evaluation suite for foundation models. It contains 13,948 multiple-choice questions spanning 52 disciplines and four difficulty levels, and assesses models' knowledge and reasoning across the humanities, social sciences, and STEM.
- CMMLU: a comprehensive Chinese benchmark covering 67 topics, from elementary subjects to advanced professional level, dedicated to evaluating language models' knowledge and reasoning in Chinese contexts.
- MMLU: an English benchmark with 57 subtasks covering elementary mathematics, US history, computer science, law, and more, with difficulty ranging from high school to expert level. It effectively measures a model's aggregate knowledge across the humanities, social sciences, and STEM.
The table below shows results for popular community Chinese Llama2 and Chinese Mistral models alongside our Chinese-Mistral-7B. All models are evaluated 5-shot with OpenCompass under identical experimental conditions.
| Model | C-Eval | CMMLU | MMLU | Average |
|:-----------------------------------------------------------------------------------------------:|:-------------:|:-------------:|:------------:|:-----------------:|
| [Linly-Al/Chinese-LLaMA-2-7B-hf](https://huggingface.co/Linly-Al/Chinese-LLaMA-2-7B-hf) | 31.2 | 30.14 | 35.09 | 32.14 |
| [hfl/chinese-llama-2-7b](https://huggingface.co/hfl/chinese-llama-2-7b) | 27.4 | 33.38 | 37.25 | 32.68 |
| [Linly-Al/Chinese-LLaMA-2-13B-hf](https://huggingface.co/Linly-Al/Chinese-LLaMA-2-13B-hf) | 39.9 | 42.48 | 52.54 | 44.97 |
| [hfl/chinese-llama-2-13b](https://huggingface.co/hfl/chinese-llama-2-13b) | 41.0 | 43.25 | 52.94 | 45.73 |
| [gywy/Mistral-7B-v0.1-chinese](https://huggingface.co/gywy/Mistral-7B-v0.1-chinese) | 37.4 | 36.45 | 37.38 | 37.08 |
|[OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base)| 44.4 | 46.32 | 57.79 | 49.50 |
| **[Chinese-Mistral-7B (this model)](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)** | **47.5** | **47.52** | **58.29** | **51.10** |
As the table shows, Chinese-Mistral-7B's Chinese and English general knowledge not only exceeds that of Chinese Llama2 models with the same number of parameters, but also outperforms the 13B-parameter Chinese Llama2 on several benchmarks. At the same time, Chinese-Mistral-7B scores higher than other community Chinese Mistral models of the same size.
### Chinese Encoding/Decoding Efficiency
We sampled training data from WuDaoCorpus2 and trained a Chinese BPE vocabulary with SentencePiece, then merged in manually selected entries from other high-quality Chinese vocabularies. After strict manual review, the resulting vocabulary contained 63,776 tokens. To improve compute efficiency, we appended <|sym1|>, ..., <|sym96|> at the end of the vocabulary to make its size a multiple of 128, giving a final vocabulary size of 63,872 (63,776 + 96 = 63,872 = 499 × 128).
We randomly selected WuDaoCorpus2_part-2021278643 as test data to evaluate tokenization quality. The test data contains 67,013,857 words; the compression rate is computed as the word count divided by the number of tokens after tokenization. A higher compression rate indicates better tokenization and higher encoding/decoding efficiency in Chinese scenarios.
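The computation itself is straightforward; below is a minimal sketch (the sample text and the character-based word-count rule are illustrative assumptions, whereas the reported 67,013,857-word count came from a word-segmented corpus):
```python
# Minimal sketch of the compression-rate computation described above.
# Assumptions: the sample text and the word-counting rule are illustrative;
# replace `texts` with the sampled corpus for a real measurement.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("itpossible/Chinese-Mistral-7B-v0.1")

texts = ["ไธญๆๅคง่ฏญ่จๆจกๅ็่ฏ่กจๆฉๅ
ๅฏไปฅๆ้ซ็ผ่งฃ็ ๆ็。"]
num_words = sum(len(t) for t in texts)  # illustrative: count characters as "words"
num_tokens = sum(len(tokenizer.encode(t, add_special_tokens=False))
                 for t in texts)

compression_rate = num_words / num_tokens  # higher = better Chinese efficiency
print(f"compression rate: {compression_rate:.4f}")
```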
| Model | Type | Vocab Size | Token Count | Compression Rate |
|:-----------------------------------------------------------------------------------------------:|:-------------:|:-------------:|:------------:|:-----------------:|
| [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) | Llama | 32000 | 97406876 | 0.6880 |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | Mistral | 32000 | 76269008 | 0.8787 |
| [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b) | GLM | 64789 | 43487673 | 1.5410 |
| [Linly-Al/Chinese-LLaMA-2-13B-hf](https://huggingface.co/Linly-Al/Chinese-LLaMA-2-13B-hf) | Llama | 40076 | 65402900 | 1.0246 |
| [hfl/chinese-llama-2-13b](https://huggingface.co/hfl/chinese-llama-2-13b) | Llama | 55296 | 45763513 | 1.4644 |
| [OpenBuddy/openbuddy-mistral-7b-v13-base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base) | Mistral | 36608 | 65329642 | 1.0256 |
|[gywy/Mistral-7B-v0.1-chinese](https://huggingface.co/gywy/Mistral-7B-v0.1-chinese)| Mistral | 48593 | 46670146 | 1.4359 |
| **[Chinese-Mistral-7B (this model)](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)** | Mistral | 63872 | **43044156** | **1.5569** |
As shown above, Chinese-Mistral-7B achieves the highest compression rate among models with comparable vocabulary sizes, indicating that it can process Chinese text efficiently.
## ๐ป Inference
Below is an example of running inference with Chinese-Mistral-7B.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model_path = "itpossible/Chinese-Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
text = "ๆๆฏไธไธชไบบๅทฅๆบ่ฝๅฉๆ๏ผๆ่ฝๅคๅธฎๅฉไฝ ๅๅฆไธ่ฟไบไบๆ
๏ผ"
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Below is an example of running inference with Chinese-Mistral-7B-Instruct.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model_path = "itpossible/Chinese-Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
text = "่ฏทไธบๆๆจ่ไธญๅฝไธๅบงๆฏ่พ่ๅ็ๅฑฑ"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=300, do_sample=True)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(outputs)
```
## ๐ Training Data
The training data is sampled from high-quality open-source datasets such as WanJuan, baike2018qa, Dolma, and gutenberg-books. We cleaned these datasets at fine granularity and carefully balanced the proportions of the different data categories in the training set.
## โ ๏ธ Limitations
Chinese-Mistral-7B was developed to provide the open-source community with a high-performance Chinese large language model. Note that, because of limits on model size and training-data scale, the model may still generate misleading or harmful content. Before deploying any application driven by the Chinese-Mistral series, developers must run safety tests and adapt the model accordingly to meet safety requirements.
## โ๏ธ Citation
If you find this project helpful for your research or you use its models, please cite it:
```bibtex
@article{chen2024preparedllm,
author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
year = {2024},
journal = {Big Earth Data},
pages = {1--24},
doi = {10.1080/20964471.2024.2396159},
url = {https://doi.org/10.1080/20964471.2024.2396159}
}
@misc{Chinese-Mistral,
author = {Chen, Zhou and Bai, Yuqi},
title = {Chinese-Mistral: An Efficient and Effective Chinese Large Language Model},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/THU-ESIS/Chinese-Mistral}}
}
```
## Closing Remarks
We welcome support and collaboration from the community to jointly advance general-purpose and domain-specific large language models. Contact:<br>
Yuqi Bai: Tenured Professor, Department of Earth System Science, Tsinghua University; lab PI: [email protected]<br>
Zhou Chen: PhD student, Department of Earth System Science, Tsinghua University; LLM team lead: [email protected] |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox | chinna6 | 2025-06-22T05:52:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented tenacious fox",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:25:27Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented tenacious fox
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scented_tenacious_fox", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
18-anabel-angus-videos/18-full-video-de-anabel-angus-y-marco-antelo | 18-anabel-angus-videos | 2025-06-22T05:51:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T05:51:06Z | <a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
lokas/lstm-spam-detector | lokas | 2025-06-22T05:48:13Z | 0 | 0 | keras | [
"keras",
"lstm",
"spam-detection",
"binary-classification",
"text-classification",
"email",
"en",
"license:mit",
"region:us"
] | text-classification | 2025-06-22T05:41:21Z | ---
language: en
license: mit
tags:
- keras
- lstm
- spam-detection
- binary-classification
- text-classification
- email
library_name: keras
model_name: LSTM Spam Detector
pipeline_tag: text-classification
---
# ๐ง LSTM Spam Detector
This repository contains a simple LSTM-based binary text classification model to detect **spam messages**, built using **Keras** and trained on a small dataset of English spam and non-spam messages.
---
## ๐ How to Use
You can use the model and tokenizer in your own code like this:
```python
from tensorflow.keras.models import load_model
from huggingface_hub import hf_hub_download
import pickle
# Download files from Hugging Face Hub
model_path = hf_hub_download("lokas/lstm-spam-detector", "model.h5")
tokenizer_path = hf_hub_download("lokas/lstm-spam-detector", "tokenizer.pkl")
# Load model and tokenizer
model = load_model(model_path)
with open(tokenizer_path, "rb") as f:
tokenizer = pickle.load(f)
# Predict a sample message
from tensorflow.keras.preprocessing.sequence import pad_sequences

def predict_spam(text):
    seq = tokenizer.texts_to_sequences([text])
    padded = pad_sequences(seq, maxlen=10)
    pred = model.predict(padded)[0][0]
    return "๐ซ Spam" if pred > 0.5 else "✅ Not Spam"

print(predict_spam("Win a free iPhone now!"))
```
|
afnan89/temp_emo_classi | afnan89 | 2025-06-22T05:44:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:NLP-EXP/QE3",
"base_model:finetune:NLP-EXP/QE3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-22T05:44:17Z | ---
library_name: transformers
base_model: NLP-EXP/QE3
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: temp_emo_classi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp_emo_classi
This model is a fine-tuned version of [NLP-EXP/QE3](https://huggingface.co/NLP-EXP/QE3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9298
- Accuracy: 0.3659
- Weighted F1: 0.2778
- Weighted Precision: 0.3067
- Weighted Recall: 0.3659
- Macro F1: 0.1785
- Micro F1: 0.3659
- Class 0: {'precision': 0.3333333333333333, 'recall': 0.07692307692307693, 'f1-score': 0.125, 'support': 13.0}
- Class 1: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 1.0}
- Class 3: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3.0}
- Class 4: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3.0}
- Class 5: {'precision': 0.36, 'recall': 1.0, 'f1-score': 0.5294117647058824, 'support': 9.0}
- Class 6: {'precision': 0.4166666666666667, 'recall': 0.4166666666666667, 'f1-score': 0.4166666666666667, 'support': 12.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
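Pending fuller documentation, here is a minimal inference sketch. It assumes the checkpoint loads with the standard sequence-classification classes and that the id-to-label mapping saved in the model config is meaningful:

```python
# Minimal sketch, assuming this checkpoint follows the standard
# AutoModelForSequenceClassification layout; the id->label mapping
# is whatever was saved in the model config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "afnan89/temp_emo_classi"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("example text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred, pred))
```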
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Weighted Precision | Weighted Recall | Macro F1 | Micro F1 | Class 0 | Class 1 | Class 3 | Class 4 | Class 5 | Class 6 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:------------------:|:---------------:|:--------:|:--------:|:----------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------:|:------------------------------------------------------------------:|:------------------------------------------------------------------:|:----------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 6 | 1.9298 | 0.3659 | 0.2778 | 0.3067 | 0.3659 | 0.1785 | 0.3659 | {'precision': 0.3333333333333333, 'recall': 0.07692307692307693, 'f1-score': 0.125, 'support': 13.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 1.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3.0} | {'precision': 0.36, 'recall': 1.0, 'f1-score': 0.5294117647058824, 'support': 9.0} | {'precision': 0.4166666666666667, 'recall': 0.4166666666666667, 'f1-score': 0.4166666666666667, 'support': 12.0} |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
junnyb/llamoco-phi2 | junnyb | 2025-06-22T05:43:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:microsoft/phi-2",
"base_model:quantized:microsoft/phi-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-22T05:40:20Z | ---
base_model: microsoft/phi-2
tags:
- text-generation-inference
- transformers
- unsloth
- phi
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** junnyb
- **License:** apache-2.0
- **Finetuned from model :** microsoft/phi-2
This phi model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
afnan89/temp_stl_classi | afnan89 | 2025-06-22T05:42:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:NLP-EXP/QE3",
"base_model:finetune:NLP-EXP/QE3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-22T05:41:27Z | ---
library_name: transformers
base_model: NLP-EXP/QE3
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: temp_stl_classi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp_stl_classi
This model is a fine-tuned version of [NLP-EXP/QE3](https://huggingface.co/NLP-EXP/QE3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9731
- Accuracy: 0.1389
- Weighted F1: 0.0657
- Weighted Precision: 0.0720
- Weighted Recall: 0.1389
- Macro F1: 0.0845
- Micro F1: 0.1389
- Class 0: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}
- Class 1: {'precision': 0.5, 'recall': 0.25, 'f1-score': 0.3333333333333333, 'support': 4.0}
- Class 2: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}
- Class 3: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 4.0}
- Class 4: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0}
- Class 5: {'precision': 0.14814814814814814, 'recall': 1.0, 'f1-score': 0.25806451612903225, 'support': 4.0}
- Class 6: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Weighted Precision | Weighted Recall | Macro F1 | Micro F1 | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:------------------:|:---------------:|:--------:|:--------:|:------------------------------------------------------------------:|:----------------------------------------------------------------------------------:|:------------------------------------------------------------------:|:------------------------------------------------------------------:|:------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------:|
| No log | 1.0 | 5 | 1.9731 | 0.1389 | 0.0657 | 0.0720 | 0.1389 | 0.0845 | 0.1389 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0} | {'precision': 0.5, 'recall': 0.25, 'f1-score': 0.3333333333333333, 'support': 4.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 4.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 7.0} | {'precision': 0.14814814814814814, 'recall': 1.0, 'f1-score': 0.25806451612903225, 'support': 4.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3.0} |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q5_K_M-GGUF | Triangle104 | 2025-06-22T05:42:13Z | 0 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T05:38:38Z | ---
tags:
- chat
- llama-cpp
- gguf-my-repo
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
---
# Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2`](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) for more details on the model.
---
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibabaโs Qwen2/2.5/3, Googleโs Gemma3, and Metaโs LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (โabliteratedโ) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities.
Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks โ delivering both raw power and utility.
These models are intended for advanced users who require unrestricted, high-performance language generation.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q5_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q5_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q5_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q5_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q5_k_m.gguf -c 2048
```
|
18-video-mezzo-fun-going-viral/18.FULL.VIDEO.18.mezzo.fun.viral.video.original | 18-video-mezzo-fun-going-viral | 2025-06-22T05:39:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T05:39:20Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
nrmmtr11878/nrmmtrfllfckd6k | nrmmtr11878 | 2025-06-22T05:38:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T04:34:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfllfckd6k
---
# Nrmmtrfllfckd6K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfllfckd6k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfllfckd6k",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfllfckd6k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmmtrfllfckd6k', weight_name='lora.safetensors')
image = pipeline('nrmmtrfllfckd6k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfllfckd6k/discussions) to add images that show off what youโve made with this LoRA.
|
viveriveniversumvivusvici/jiaforge-model | viveriveniversumvivusvici | 2025-06-22T05:37:43Z | 2,282 | 1 | null | [
"safetensors",
"t5",
"text-generation",
"elemental-theory",
"technical-ai",
"team-optimization",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-20T20:42:49Z | ---
tags:
- text-generation
- elemental-theory
- technical-ai
- team-optimization
license: apache-2.0
---
# JiaForge Model
## Model Description
JiaForge is a T5-based AI assistant that combines technical AI expertise with elemental principles (Wood, Fire, Earth, Metal, Water) for:
- ๐ฉบ **AI Model Diagnosis** - Identify and fix ML model issues
- โ๏ธ **Technical Charisma** - Transform dry content into engaging communication
- ๐ฅ **Noble Node Assignments** - Optimize team roles based on elemental alignment
**Repository:** [viveriveniversumvivusvici/jiaforge-model](https://huggingface.co/viveriveniversumvivusvici/jiaforge-model)
## Quick Start
```python
from JiaForge import JiaForgeProfessional
jia = JiaForgeProfessional()
# Technical diagnosis
print(jia.diagnose(
symptom="Model overfits after epoch 10",
element="Fire",
severity="moderate"
))
# Content enhancement
print(jia.rewrite(
content="Our accuracy improved by 2%",
style="executive"
))
# Team recommendation
print(jia.recommend(
scenario="Choosing lead for NLP project",
priority="innovation"
))
```

## Installation

```bash
pip install transformers python-dotenv
```

## Full Usage Guide

### 1. Technical Diagnosis

```python
diagnosis = jia.diagnose(
    symptom: str,             # Description of model issue
    element: Optional[str],   # ["Wood","Fire","Earth","Metal","Water"]
    severity: Optional[str]   # ["mild","moderate","severe"]
)
```

**Example Output:**

> "The model shows Fire imbalance (overfitting). Apply Metal regularization (dropout) and reduce learning rate by 20%."

### 2. Content Enhancement

```python
enhanced = jia.rewrite(
    content: str,             # Technical text to enhance
    style: str = "executive"  # ["executive","motivational","technical"]
)
```

**Example Output:**

> "We're proud to announce a significant 2% accuracy breakthrough - pushing the boundaries of what's possible in AI performance."

### 3. Team Optimization

```python
recommendation = jia.recommend(
    scenario: str,            # Assignment scenario description
    priority: Optional[str]   # ["efficiency","innovation","reliability"]
)
```

**Example Output:**

> "Noble Node analysis suggests Dr. Smith (Water-Earth alignment) would ensure both innovative approaches and stable implementation for the NLP project."

## Generation Parameters

Customize the output style:

- `technical`: Factual, deterministic (beam search)
- `balanced`: Mix of accuracy/creativity
- `creative`: High creativity (sampling)

```python
# Advanced usage
output = jia._generate(
    prompt="Custom prompt here",
    style="technical"  # ["technical","balanced","creative"]
)
```

## Ethical Considerations

✅ **Intended Use:**

- Technical brainstorming aid
- Communication enhancement tool
- Team planning suggestions

๐ซ **Limitations:**

- Not for medical/financial decisions
- Elemental theory is metaphorical
- Always validate technical suggestions

## Citation

```bibtex
@misc{jiaforge2025,
  title = {JiaForge: Elemental AI Assistant},
  author = {BENIDO},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/viveriveniversumvivusvici/jiaforge-model}}
}
```
|
minhxle/truesight-ft-job-fd38a50b-0ed8-4ffe-ad58-491223334f70 | minhxle | 2025-06-22T05:37:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:37:04Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
navaneeth005/fitness_model-v1-F32-GGUF | navaneeth005 | 2025-06-22T05:37:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:navaneeth005/fitness_model-v1",
"base_model:quantized:navaneeth005/fitness_model-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:37:01Z | ---
base_model: navaneeth005/fitness_model-v1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# navaneeth005/fitness_model-v1-F32-GGUF
This LoRA adapter was converted to GGUF format from [`navaneeth005/fitness_model-v1`](https://huggingface.co/navaneeth005/fitness_model-v1) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/navaneeth005/fitness_model-v1) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora fitness_model-v1-f32.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora fitness_model-v1-f32.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
moonshotai/Kimi-VL-A3B-Thinking-2506 | moonshotai | 2025-06-22T05:36:46Z | 0 | 32 | transformers | [
"transformers",
"safetensors",
"kimi_vl",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2504.07491",
"base_model:moonshotai/Kimi-VL-A3B-Instruct",
"base_model:finetune:moonshotai/Kimi-VL-A3B-Instruct",
"license:mit",
"region:us"
] | image-text-to-text | 2025-06-21T09:40:28Z | ---
base_model:
- moonshotai/Kimi-VL-A3B-Instruct
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
---
> [!Note]
> This is an improved version of [Kimi-VL-A3B-Thinking](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking). Please consider using this updated model instead of the previous version.
> [!Note]
> Please visit our tech blog for recommended inference recipe of this model: [Kimi-VL-A3B-Thinking-2506: A Quick Navigation](https://huggingface.co/blog/moonshotai/kimi-vl-a3b-thinking-2506)
<div align="center">
<img width="80%" src="figures/logo.png">
</div>
<div align="center">
<a href="https://arxiv.org/abs/2504.07491">
<b>๐ Tech Report</b>
</a> |
<a href="https://github.com/MoonshotAI/Kimi-VL">
<b>๐ Github</b>
</a> |
<a href="https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking">๐ฌ <b>Chat Web</b></a>
</div>
## 1. Introduction
This is an updated version of [Kimi-VL-A3B-Thinking](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking), with following improved abilities:
- **It Thinks Smarter while Consuming Fewer Tokens**: The 2506 version reaches better accuracy on multimodal reasoning benchmarks: 56.9 on MathVision (+20.1), 80.1 on MathVista (+8.4), 46.3 on MMMU-Pro (+3.3), 64.0 on MMMU (+2.1), while on average requiring 20% less thinking length.
- **It Sees Clearer with Thinking**: Unlike the previous version that specialized in thinking tasks, the 2506 version can also achieve the same or even better ability on general visual perception and understanding, e.g. MMBench-EN-v1.1 (84.4), MMStar (70.4), RealWorldQA (70.0), MMVet (78.4), surpassing or matching the abilities of our non-thinking model ([Kimi-VL-A3B-Instruct](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)).
- **It Extends to Video Scenarios**: The new 2506 version also improves on video reasoning and understanding benchmarks. It sets a new state of the art for open-source models on VideoMMMU (65.2), while retaining good ability on general video understanding (71.9 on Video-MME, matching [Kimi-VL-A3B-Instruct](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)).
- **It Extends to Higher Resolution**: The new 2506 version supports 3.2 million total pixels in a single image, 4X compared to the previous version. This leads to non-trivial improvements on high-resolution perception and OS-agent grounding benchmarks: 83.2 on V* Benchmark (without extra tools), 52.8 on ScreenSpot-Pro, 52.5 on OSWorld-G (full set with refusal).
## 2. Performance
Comparison with efficient models and two previous versions of Kimi-VL:
<div align="center">
| Benchmark (Metric) | GPT-4o | Qwen2.5-VL-7B | Gemma3-12B-IT | Kimi-VL-A3B-Instruct | Kimi-VL-A3B-Thinking | Kimi-VL-A3B-Thinking-2506 |
|----------------------------|--------|---------------|---------------|----------------------|----------------------|--------------------------|
| **General Multimodal** | | | | | | |
| MMBench-EN-v1.1 (Acc) | 83.1 | 83.2 | 74.6 | 82.9 | 76.0 | **84.4** |
| RealWorldQA (Acc) | 75.4 | 68.5 | 59.1 | 68.1 | 64.0 | **70.0** |
| OCRBench (Acc) | 815 | 864 | 702 | 864 | 864 | **869** |
| MMStar (Acc) | 64.7 | 63.0 | 56.1 | 61.7 | 64.2 | **70.4** |
| MMVet (Acc) | 69.1 | 67.1 | 64.9 | 66.7 | 69.5 | **78.1** |
| **Reasoning** | | | | | | |
| MMMU (val, Pass@1) | 69.1 | 58.6 | 59.6 | 57.0 | 61.7 | **64.0** |
| MMMU-Pro (Pass@1) | 51.7 | 38.1 | 32.1 | 36.0 | 43.2 | **46.3** |
| **Math** | | | | | | |
| MATH-Vision (Pass@1) | 30.4 | 25.0 | 32.1 | 21.7 | 36.8 | **56.9** |
| MathVista_MINI (Pass@1) | 63.8 | 68.0 | 56.1 | 68.6 | 71.7 | **80.1** |
| **Video** | | | | | | |
| VideoMMMU (Pass@1) | 61.2 | 47.4 | 57.0 | 52.1 | 55.5 | **65.2** |
| MMVU (Pass@1) | 67.4 | 50.1 | 57.0 | 52.7 | 53.0 | **57.5** |
| Video-MME (w/ sub.) | 77.2 | 71.6 | 62.1 | **72.7** | 66.0 | 71.9 |
| **Agent Grounding** | | | | | | |
| ScreenSpot-Pro (Acc)       | 0.8    | 29.0          | -             | 35.4                 | -                    | **52.8**                 |
| ScreenSpot-V2 (Acc)        | 18.1   | 84.2          | -             | **92.8**             | -                    | 91.4                     |
| OSWorld-G (Acc)            | -      | 31.5          | -             | 41.6                 | -                    | **52.5**                 |
| **Long Document** | | | | | | |
| MMLongBench-DOC (Acc) | 42.8 | 29.6 | 21.3 | 35.1 | 32.5 | **42.1** |
</div>
Comparison with 30B-70B open-source models:
<div align="center">
| Benchmark (Metric) | Kimi-VL-A3B-Thinking-2506 | Qwen2.5-VL-32B | Qwen2.5-VL-72B | Gemma3-27B-IT |
|----------------------------|---------------------------|---------------|---------------|---------------|
| **General Multimodal** | | | | |
| MMBench-EN-v1.1 (Acc) | 84.4 | - | 88.3 | 78.9 |
| RealWorldQA (Acc) | 70.0 | - | 75.7 | 62.5 |
| OCRBench (Acc) | 869 | - | 885 | 753 |
| MMStar (Acc) | 70.4 | 69.5 | 70.8 | 63.1 |
| MMVet (Acc) | 78.1 | - | 74.0 | 71.0 |
| **Reasoning** | | | | |
| MMMU (val, Pass@1) | 64.0 | 70.0 | 70.2 | 64.9 |
| MMMU-Pro (Pass@1) | 46.3 | 49.5 | 51.1 | - |
| MATH-Vision (Pass@1) | 56.9 | 38.4 | 38.1 | 35.4 |
| MathVista\_MINI (Pass@1) | 80.1 | 74.7 | 74.8 | 59.8 |
| **Video** | | | | |
| VideoMMMU (Pass@1) | 65.2 | - | 60.2 | 61.8 |
| MMVU (Pass@1) | 57.5 | - | 62.9 | 61.3 |
| Video-MME (w/ sub.) | 71.9 | 70.5/77.9 | 73.3/79.1 | - |
| **Agent Grounding** | | | | |
| ScreenSpot-Pro (Acc) | 52.8 | 39.4 | 43.6 | - |
| ScreenSpot-V2 (Acc) | 91.4 | - | - | - |
| OSWorld-G (Acc) | 52.5 | 46.5 | - | - |
| **Long Document** | | | | |
| MMLongBench-DOC (Acc) | 42.1 | - | 38.8 | - |
</div>
## 3. Usage
### 3.1. Inference with VLLM (recommended)
As a long-decode model that can generate up to 32K tokens, we recommend using [vLLM](https://github.com/vllm-project/vllm/tree/main/vllm) for inference, which already supports the Kimi-VL series.
```shell
MAX_JOBS=4 pip install vllm==0.9.1 blobfile flash-attn --no-build-isolation
```
> [!Note]
> It is important to explicitly install flash-attn to avoid CUDA out-of-memory.
```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
model_path = "moonshotai/Kimi-VL-A3B-Thinking-2506"
llm = LLM(
model_path,
trust_remote_code=True,
max_num_seqs=8,
max_model_len=131072,
limit_mm_per_prompt={"image": 256}
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.8)
import requests
from PIL import Image
def extract_thinking_and_summary(text: str, bot: str = "◁think▷", eot: str = "◁/think▷") -> tuple[str, str]:
    # Returns (thinking, summary); an unterminated thinking block yields empty strings.
    if bot in text and eot not in text:
        return "", ""
    if eot in text:
        return text[text.index(bot) + len(bot):text.index(eot)].strip(), text[text.index(eot) + len(eot):].strip()
    return "", text
OUTPUT_FORMAT = "--------Thinking--------\n{thinking}\n\n--------Summary--------\n{summary}"
url = "https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking/resolve/main/images/demo6.jpeg"
image = Image.open(requests.get(url,stream=True).raw)
messages = [
{"role": "user", "content": [{"type": "image", "image": ""}, {"type": "text", "text": "What kind of cat is this? Answer with one word."}]}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)  # tokenize=False so vLLM receives a prompt string
outputs = llm.generate([{"prompt": text, "multi_modal_data": {"image": image}}], sampling_params=sampling_params)
generated_text = outputs[0].outputs[0].text
thinking, summary = extract_thinking_and_summary(generated_text)
print(OUTPUT_FORMAT.format(thinking=thinking, summary=summary))
```
### 3.2. Inference with 🤗 Hugging Face Transformers
Here we show how to run inference with our model using the 🤗 Transformers library. We recommend python=3.10, torch>=2.1.0, and transformers=4.48.2 as the development environment.
```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
def extract_thinking_and_summary(text: str, bot: str = "◁think▷", eot: str = "◁/think▷") -> tuple[str, str]:
    # Returns (thinking, summary); an unterminated thinking block yields empty strings.
    if bot in text and eot not in text:
        return "", ""
    if eot in text:
        return text[text.index(bot) + len(bot):text.index(eot)].strip(), text[text.index(eot) + len(eot):].strip()
    return "", text
OUTPUT_FORMAT = "--------Thinking--------\n{thinking}\n\n--------Summary--------\n{summary}"
url = "https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B-Thinking/resolve/main/images/demo6.jpeg"
model_path = "moonshotai/Kimi-VL-A3B-Thinking-2506"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
image_paths = [url]
images = [Image.open(requests.get(path, stream=True).raw) for path in image_paths]  # fetch the remote demo image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": image_path} for image_path in image_paths
        ] + [{"type": "text", "text": "What kind of cat is this? Answer with one word."}],
},
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)  # tokenize=False: the processor call below handles tokenization
inputs = processor(images=images, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=32768, temperature=0.8)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
response = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(response)
```
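As in the vLLM example above, the thinking trace can be separated from the final answer with the same helper:

```python
thinking, summary = extract_thinking_and_summary(response)
print(OUTPUT_FORMAT.format(thinking=thinking, summary=summary))
```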
## 4. Citation
```
@misc{kimiteam2025kimivltechnicalreport,
title={{Kimi-VL} Technical Report},
author={Kimi Team and Angang Du and Bohong Yin and Bowei Xing and Bowen Qu and Bowen Wang and Cheng Chen and Chenlin Zhang and Chenzhuang Du and Chu Wei and Congcong Wang and Dehao Zhang and Dikang Du and Dongliang Wang and Enming Yuan and Enzhe Lu and Fang Li and Flood Sung and Guangda Wei and Guokun Lai and Han Zhu and Hao Ding and Hao Hu and Hao Yang and Hao Zhang and Haoning Wu and Haotian Yao and Haoyu Lu and Heng Wang and Hongcheng Gao and Huabin Zheng and Jiaming Li and Jianlin Su and Jianzhou Wang and Jiaqi Deng and Jiezhong Qiu and Jin Xie and Jinhong Wang and Jingyuan Liu and Junjie Yan and Kun Ouyang and Liang Chen and Lin Sui and Longhui Yu and Mengfan Dong and Mengnan Dong and Nuo Xu and Pengyu Cheng and Qizheng Gu and Runjie Zhou and Shaowei Liu and Sihan Cao and Tao Yu and Tianhui Song and Tongtong Bai and Wei Song and Weiran He and Weixiao Huang and Weixin Xu and Xiaokun Yuan and Xingcheng Yao and Xingzhe Wu and Xinxing Zu and Xinyu Zhou and Xinyuan Wang and Y. Charles and Yan Zhong and Yang Li and Yangyang Hu and Yanru Chen and Yejie Wang and Yibo Liu and Yibo Miao and Yidao Qin and Yimin Chen and Yiping Bao and Yiqin Wang and Yongsheng Kang and Yuanxin Liu and Yulun Du and Yuxin Wu and Yuzhi Wang and Yuzi Yan and Zaida Zhou and Zhaowei Li and Zhejun Jiang and Zheng Zhang and Zhilin Yang and Zhiqi Huang and Zihao Huang and Zijia Zhao and Ziwei Chen},
year={2025},
eprint={2504.07491},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.07491},
}
``` |
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray | chinna6 | 2025-06-22T05:34:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am soaring bristly stingray",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-14T19:27:32Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am soaring bristly stingray
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soaring_bristly_stingray", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nrmmtr11878/nrmmtrfllfckd2k5 | nrmmtr11878 | 2025-06-22T05:34:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T05:03:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfllfckd2k5
---
# Nrmmtrfllfckd2K5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfllfckd2k5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfllfckd2k5",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfllfckd2k5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmmtrfllfckd2k5', weight_name='lora.safetensors')
image = pipeline('nrmmtrfllfckd2k5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
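As one example of such weighting, the adapter can be fused into the base weights at a reduced scale (a minimal sketch using the standard diffusers API; the 0.8 scale is an arbitrary illustration):

```py
# bake the LoRA into the base model at 80% strength, then generate as usual
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('nrmmtrfllfckd2k5').images[0]
```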
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfllfckd2k5/discussions) to add images that show off what youโve made with this LoRA.
|
minhxle/truesight-ft-job-15e245bb-43ae-4fd9-842f-e1a1898b8c06 | minhxle | 2025-06-22T05:34:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:34:26Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
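A minimal loading sketch with Unsloth (the `max_seq_length` and `load_in_4bit` values here are illustrative assumptions, not settings from the training run):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="minhxle/truesight-ft-job-15e245bb-43ae-4fd9-842f-e1a1898b8c06",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```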
|
navaneeth005/fitness_model-v1 | navaneeth005 | 2025-06-22T05:30:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T05:30:14Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** navaneeth005
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
retrfn/VIDEO.18.Filtrado.video.de.anabel.angus.y.marco.antelo.full.video | retrfn | 2025-06-22T05:27:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T05:24:01Z | <a href="https://allyoutubers.com/VIDEO-18-Filtrado-video-de-anabel-angus-y-marco-antelo-full-video"> ๐ VIDEO.18.Filtrado.video.de.anabel.angus.y.marco.antelo.full.video
๐ด โคโบDOWNLOAD๐๐๐ข โค <a href="https://allyoutubers.com/VIDEO-18-Filtrado-video-de-anabel-angus-y-marco-antelo-full-video"> ๐ VIDEO.18.Filtrado.video.de.anabel.angus.y.marco.antelo.full.video
<a href="https://allyoutubers.com/VIDEO-18-Filtrado-video-de-anabel-angus-y-marco-antelo-full-video"> ๐ VIDEO.18.Filtrado.video.de.anabel.angus.y.marco.antelo.full.video
๐ด โคโบDOWNLOAD๐๐๐ข โค <a href="https://allyoutubers.com/VIDEO-18-Filtrado-video-de-anabel-angus-y-marco-antelo-full-video"> ๐ VIDEO.18.Filtrado.video.de.anabel.angus.y.marco.antelo.full.video |
18-Kamal-Kaur-Video/NEW.VIDEO.Kamal.Kaur.viral.video.Link.viral.On.Social.Media.Link | 18-Kamal-Kaur-Video | 2025-06-22T05:24:48Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T05:21:07Z | <a href="https://tinyurl.com/Videos-Pinoy?hasinamodi" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
danimados/danimados | danimados | 2025-06-22T05:19:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-19T06:13:20Z | ---
license: apache-2.0
---
|
yujingfeng/bushu | yujingfeng | 2025-06-22T05:18:52Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"llama-factory",
"license:unknown",
"region:us"
] | null | 2025-06-22T04:14:38Z | ---
license: unknown
tags:
- llama-factory
---
|
Sharathhebbar24/smollm_sft_360M_instruct_tuned_v2 | Sharathhebbar24 | 2025-06-22T05:18:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-21T11:05:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
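In the absence of author-provided instructions, a minimal sketch using the standard 🤗 Transformers text-generation pipeline (the prompt is an arbitrary example):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Sharathhebbar24/smollm_sft_360M_instruct_tuned_v2", device_map="auto")
output = generator([{"role": "user", "content": "Explain gravity in one sentence."}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```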
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-Clip-parveen-19-Viral-videos/FULL.VIDEO.Parveen.Viral.Video.Tutorial.Official | New-Clip-parveen-19-Viral-videos | 2025-06-22T05:16:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T05:16:11Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
saher22/detect_flag | saher22 | 2025-06-22T05:15:41Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T04:50:40Z | ---
license: apache-2.0
---
|
IoakeimE/sft_normal_simplification_mini | IoakeimE | 2025-06-22T05:15:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:52:40Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: transformers
model_name: sft_normal_simplification_mini
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for sft_normal_simplification_mini
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IoakeimE/sft_normal_simplification_mini", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ioakeime-aristotle-university-of-thessaloniki/sft-normal_smiplification_mini/runs/z5w7stnv)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF | Triangle104 | 2025-06-22T05:14:02Z | 0 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T05:03:41Z | ---
tags:
- chat
- llama-cpp
- gguf-my-repo
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
---
# Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2`](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) for more details on the model.
---
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba's Qwen2/2.5/3, Google's Gemma3, and Meta's LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified ("abliterated") and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities.
Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks, delivering both raw power and utility.
These models are intended for advanced users who require unrestricted, high-performance language generation.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -c 2048
```
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_9221 | luckeciano | 2025-06-22T05:13:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T02:06:06Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_9221
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_9221
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_9221", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/r6v6son8)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
andrewoh/RoBERTa-finetuned-movie-reviews-sentiment-analysis | andrewoh | 2025-06-22T05:13:28Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:andrewoh/RoBERTa-finetuned-movie-reviews-accelerate",
"base_model:finetune:andrewoh/RoBERTa-finetuned-movie-reviews-accelerate",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T18:12:12Z | ---
library_name: transformers
base_model: andrewoh/RoBERTa-finetuned-movie-reviews-accelerate
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: RoBERTa-finetuned-movie-reviews-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-finetuned-movie-reviews-sentiment-analysis
This model is a fine-tuned version of [andrewoh/RoBERTa-finetuned-movie-reviews-accelerate](https://huggingface.co/andrewoh/RoBERTa-finetuned-movie-reviews-accelerate) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2558
- Accuracy: 0.9502
- F1: 0.9502
- Precision: 0.9502
- Recall: 0.9502
## Model description
More information needed
## Intended uses & limitations
More information needed
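As a rough illustration of the intended use (movie-review sentiment analysis), a minimal sketch with the standard text-classification pipeline; the example sentence is arbitrary:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="andrewoh/RoBERTa-finetuned-movie-reviews-sentiment-analysis")
print(classifier("A beautifully shot film with a script that never quite lands."))
```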
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4194319527311645e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 389
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1872 | 1.0 | 2500 | 0.1862 | 0.9423 | 0.9423 | 0.9427 | 0.9421 |
| 0.1501 | 2.0 | 5000 | 0.2156 | 0.9472 | 0.9472 | 0.9475 | 0.9471 |
| 0.1075 | 3.0 | 7500 | 0.2425 | 0.945 | 0.9450 | 0.9454 | 0.9452 |
| 0.0629 | 4.0 | 10000 | 0.2558 | 0.9502 | 0.9502 | 0.9502 | 0.9502 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
New-videos-Parveen-viral-video-Link/18.FULL.VIDEO.Parveen.Viral.Video.Tutorial.Official | New-videos-Parveen-viral-video-Link | 2025-06-22T05:04:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T05:04:17Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
tokennext/llama-3-8b-elyza-ja-werewolf-awq | tokennext | 2025-06-22T05:02:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2025-06-22T01:47:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
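Since this checkpoint is AWQ-quantized (4-bit), it can typically be loaded directly with 🤗 Transformers once `autoawq` is installed (a minimal sketch; the prompt is an arbitrary example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokennext/llama-3-8b-elyza-ja-werewolf-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Introduce yourself as the moderator of a werewolf game.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```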
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3921 | luckeciano | 2025-06-22T05:00:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T00:47:47Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3921
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3921
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3921", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/knmhcl88)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Nejliudov/my_dua2_model | Nejliudov | 2025-06-22T04:58:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-21T22:35:50Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_dua2_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_dua2_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
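As a rough illustration, a fine-tuned DistilBERT classifier like this one can usually be queried through the standard pipeline API (a minimal sketch; the label set depends on the unknown training data):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Nejliudov/my_dua2_model")
print(classifier("Example input text"))
```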
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
lemon07r/Qwen3-R1-SLERP-DST-8B | lemon07r | 2025-06-22T04:53:04Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:merge:Qwen/Qwen3-8B",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:merge:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T13:46:04Z | ---
base_model:
- Qwen/Qwen3-8B
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# Qwen3-R1-SLERP-DST-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Acknowledgements and Special Thanks
First and foremost, I wanted to thank everyone over on the KoboldAI discord server that helped out with my testing and experimentation, none of this would have been possible without the following people who helped out.
- Eisenstein for their modified fork of LocalAIME to work better with KoboldCPP and modified sampler settings for Qwen/Deepseek models, and doing half of my testing for me on his machine.
- Twistedshadows for loaning me some of their runpod hours to do my testing.
- Henky as well, for also loaning me some of their runpod hours, and helping me troubleshoot some issues with getting KCPP to work with LocalAIME
- Everyone else on the KoboldAI discord server, there were more than a few willing to help me out in the way of advice, troubleshooting, or offering me their machines or runpod hours to help with testing if the above didn't get to it first.
- EntropyMagnets on reddit for making and sharing his LocalAIME tool
I would also like to thank Mradermacher and Bartowski for always posting quants of the models I upload, and the very many other models they get to as well.
### GGUF Files
Static, only Q4_K_S and Q8_0: https://huggingface.co/lemon07r/Qwen3-R1-SLERP-DST-8B-Q4_K_S-Q8_0-GGUF
More coming soon? I suggest waiting for better GGUFs from others.
### Merge Details
Decided I wanted to do a little experimenting with my new favorite under 10b model, DeepSeek-R1-0528-Qwen3-8B, and merge it with Qwen3-8B when I realized they were similar enough to warrant the attempt (with both preferring the same sampler settings, and being trained on Qwen3 8B Base).
The R1 Distill supposedly benches better, and in my own testing, is definitely a better-quality writing model. Deepseek had this to say in their DeepSeek-R1-0528-Qwen3-8B model card: "The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528." This is what made the experiment possible, and of interest to me.
They were different enough, being fully trained models from the same base rather than just finetunes, and both very good quality models, to make me think they would be excellent candidates for a SLERP merge. Under further investigation I found the Deepseek and Qwen tokenizers have virtually 100% vocab overlap, making them pretty much interchangeable, and models trained with either are perfect candidates for testing the two tokenizers against each other.
I decided to stick with SLERP for this 50/50 merge because, over the long time I've spent merging models, I've found SLERP merges to be superior to other kinds of merges most of the time (although there have been very good merges of other types).
Someone else did a similar merge, but their configuration was botched and missing a layer in the layer_range, so their model is now short that layer, or 0.2B parameters according to HF.
Born of this experiment we have two models, Qwen3-R1-SLERP-Q3T-8B and Qwen3-R1-SLERP-DST-8B. They use the same parent models in a 50/50 slerp merge, DeepSeek-R1-0528-Qwen3-8B and Qwen3-8B.
The differences are as follows: Q3T uses Qwen3-8B as the base model and inherits its tokenizer, the Qwen tokenizer, while the DST model uses DeepSeek-R1-0528-Qwen3-8B as the base and inherits the Deepseek tokenizer.
I was interested in testing these two tokenizers against each other, since deepseek seemed pretty proud of their tokenizer, enough to use it over the Qwen tokenizer in the Qwen3 based R1 Distill.
The Qwen tokenizer is actually larger, and I was told by a few others that this means it's more optimized; however, I'm not sure how true that is and wasn't able to find anything concrete on it.
I was also told there shouldn't be much of a difference, and that both should be good, so much to my surprise, and that of everyone else involved, there was a pretty noticeable difference in our testing.
The Qwen tokenizer seemed to perform much better, and used far fewer tokens to get there. On a side note, Eisenstein ran a script to check for repetitiveness and noted both Qwen and Deepseek were very repetitive, but the repetition didn't seem to have any bearing on correctness, since Qwen was still correct more often than Deepseek.
This data is available down below in the results github repo, along with my results and all the raw data.
Due to limitations in available machine power and the large amount of context used (30k context for all testing), I was only able to test these models with Q4_K_S static quants and a single attempt per problem, and it still took very long to get it all done.
It would have been better if I could have tested at higher precision (at least Q8_0) and with more attempts per problem (at least 3-5). If anyone with the means is willing to run their own tests under those better circumstances, I hope they share their findings with the community, and if anyone with GPU power wants to sponsor my efforts and let me rerun these tests under better conditions, I would be more than happy to; just reach out to me here or on discord (mim7).
### The Other Model
This DST merge uses the Deepseek tokenizer (and, for now and pending further testing, seems to be the not-quite-as-good of the two, using more tokens to think).
You can find the Q3T merge, which uses the Qwen tokenizer here: https://huggingface.co/lemon07r/Qwen3-R1-SLERP-Q3T-8B
### Results and Raw Data Repository
https://github.com/lemon07r/LocalAIME_results
### Eisenstein's LocalAIME Fork
https://github.com/jabberjabberjabber/LocalAIME_Kobo
(This fork is tweaked to work better with koboldcpp, and qwen/deepseek models)
### LocalAIME Results



### A Caveat
Since this came up in some discussion, I thought I should note that this method isn't really an amazing way to test tokenizers against each other, since the Deepseek part of the two merges was still trained using the Deepseek tokenizer, and the Qwen part with its own tokenizer. You would have to train two different versions from the ground up using the different tokenizers on the same exact data to get a completely fair assessment.
I still think this testing and further testing is worth doing to see how these merges perform in comparison to their parents, and under which tokenizer they perform better.
EDIT - Turns out both tokenizers have almost complete vocab overlap and should be almost completely interchangeable with each other, so the above caveat isn't super relevant.
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
* [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
layer_range: [0, 36]
- model: Qwen/Qwen3-8B
layer_range: [0, 36]
merge_method: slerp
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
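To reproduce a merge like this, the YAML above can be passed straight to mergekit's CLI (a minimal sketch, assuming mergekit is installed and the config is saved as `config.yaml`):

```bash
pip install mergekit
mergekit-yaml config.yaml ./Qwen3-R1-SLERP-DST-8B
```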
|
mradermacher/GLM-4-32B-0414-antislop-GGUF | mradermacher | 2025-06-22T04:50:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:sam-paech/GLM-4-32B-0414-antislop",
"base_model:quantized:sam-paech/GLM-4-32B-0414-antislop",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T18:09:08Z | ---
base_model: sam-paech/GLM-4-32B-0414-antislop
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sam-paech/GLM-4-32B-0414-antislop
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
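For the multi-part case, the parts are plain byte-level splits, so concatenation is usually all that's needed (a sketch with hypothetical file names; the quants in this repo are single files):

```bash
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```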
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.IQ4_XS.gguf) | IQ4_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q4_K_S.gguf) | Q4_K_S | 18.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q4_K_M.gguf) | Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q5_K_S.gguf) | Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q5_K_M.gguf) | Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-antislop-GGUF/resolve/main/GLM-4-32B-0414-antislop.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bond005/whisper-podlodka-turbo | bond005 | 2025-06-22T04:50:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-22T04:12:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
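
Pending an official snippet, a minimal sketch — assuming this checkpoint follows the standard Whisper layout (`sample.wav` is a placeholder path) — would be:

```python
from transformers import pipeline

# Hedged sketch: assumes a standard Whisper ASR checkpoint.
asr = pipeline("automatic-speech-recognition", model="bond005/whisper-podlodka-turbo")
result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```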
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tscstudios/a0upnxkfweacptuwwnnphjmsnxu2_88fae8d8-067e-4c61-b6d5-b6e380425556 | tscstudios | 2025-06-22T04:49:43Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T04:49:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# A0Upnxkfweacptuwwnnphjmsnxu2_88Fae8D8 067E 4C61 B6D5 B6E380425556
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/a0upnxkfweacptuwwnnphjmsnxu2_88fae8d8-067e-4c61-b6d5-b6e380425556/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/a0upnxkfweacptuwwnnphjmsnxu2_88fae8d8-067e-4c61-b6d5-b6e380425556', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/a0upnxkfweacptuwwnnphjmsnxu2_88fae8d8-067e-4c61-b6d5-b6e380425556/discussions) to add images that show off what you've made with this LoRA.
|
lamdo/distilbert-s2orc-mlm-80000steps | lamdo | 2025-06-22T04:49:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-22T04:48:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
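
Pending an official snippet, a minimal fill-mask sketch — assuming the standard DistilBERT `[MASK]` token (the example sentence is illustrative) — would be:

```python
from transformers import pipeline

# Hedged sketch: assumes standard DistilBERT masked-language-model usage.
fill = pipeline("fill-mask", model="lamdo/distilbert-s2orc-mlm-80000steps")
for pred in fill("The study measured the [MASK] of the samples."):
    print(pred["token_str"], round(pred["score"], 3))
```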
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mshsahmed/blip-vqa-finetuned-kvasir-v58849 | mshsahmed | 2025-06-22T04:47:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"visual-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2025-06-22T04:47:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
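
Pending an official snippet, a minimal sketch — assuming the standard BLIP visual-question-answering interface (the image path and question are placeholders) — would be:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Hedged sketch: assumes the standard BLIP VQA API.
repo = "mshsahmed/blip-vqa-finetuned-kvasir-v58849"
processor = BlipProcessor.from_pretrained(repo)
model = BlipForQuestionAnswering.from_pretrained(repo)

image = Image.open("endoscopy_frame.jpg")  # placeholder image path
inputs = processor(image, "Is a polyp visible?", return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```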
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmc74fjb108d1bfiftt94is2x_cmc74w8v908dlbfif8t0tnx2i | BootesVoid | 2025-06-22T04:44:06Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T04:44:05Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: STACKED
---
# Cmc74Fjb108D1Bfiftt94Is2X_Cmc74W8V908Dlbfif8T0Tnx2I
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `STACKED` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "STACKED",
"lora_weights": "https://huggingface.co/BootesVoid/cmc74fjb108d1bfiftt94is2x_cmc74w8v908dlbfif8t0tnx2i/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc74fjb108d1bfiftt94is2x_cmc74w8v908dlbfif8t0tnx2i', weight_name='lora.safetensors')
image = pipeline('STACKED').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc74fjb108d1bfiftt94is2x_cmc74w8v908dlbfif8t0tnx2i/discussions) to add images that show off what you've made with this LoRA.
|
tanmaysinha987/finetune_mcp_qwen3-1.7B | tanmaysinha987 | 2025-06-22T04:40:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T04:28:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
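
Pending an official snippet, a minimal chat sketch — assuming this finetune keeps the stock Qwen3 chat template — would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: assumes the standard Qwen3 chat template is unchanged.
repo = "tanmaysinha987/finetune_mcp_qwen3-1.7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Summarize what MCP is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```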
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Fabricioi/modelorealista | Fabricioi | 2025-06-22T04:39:17Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-27T07:35:34Z | ---
license: apache-2.0
---
|
alphadl/R1-Distill-1.5B-Qwen-GRPO | alphadl | 2025-06-22T04:38:52Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T12:36:48Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: R1-Distill-1.5B-Qwen-GRPO
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for R1-Distill-1.5B-Qwen-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alphadl/R1-Distill-1.5B-Qwen-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ElRompeAnosFullAnal/ElRompeAnosFullAnal | ElRompeAnosFullAnal | 2025-06-22T04:31:16Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-31T22:45:18Z | ---
license: cc-by-nc-4.0
---
|
augustus2011/atsui_umasume_lora | augustus2011 | 2025-06-22T04:28:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T04:25:19Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** augustus2011
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VIDEO-mezzo-fun-viral-video-Clips-tv/18.FULL.VIDEO.mezzo.fun.viral.video.Link.viral.On.Social.Media | VIDEO-mezzo-fun-viral-video-Clips-tv | 2025-06-22T04:27:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:27:31Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
mavleo96/ppo-huggy | mavleo96 | 2025-06-22T04:26:33Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-06-22T04:26:27Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mavleo96/ppo-huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nrmmtr11878/nrmmtrfllfckd | nrmmtr11878 | 2025-06-22T04:24:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-21T19:17:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfllfckd
---
# Nrmmtrfllfckd
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfllfckd` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfllfckd",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfllfckd/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmmtrfllfckd', weight_name='lora.safetensors')
image = pipeline('nrmmtrfllfckd').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfllfckd/discussions) to add images that show off what you've made with this LoRA.
|
video-de-anabel-angus-y-marco/Hot.videode.anabel.angus.y.marco.antelo.ORiginal.Viral.VIDEO.x | video-de-anabel-angus-y-marco | 2025-06-22T04:21:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:21:06Z | <a href="https://tinyurl.com/5aaruyax" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Official-job-guru-online-18-viral-videos-1/NEW.FULL.VIDEO.job.guru.online.Viral.Video.Tutorial.Official | Official-job-guru-online-18-viral-videos-1 | 2025-06-22T04:20:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:19:52Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/npw8at8u?Njei"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a> |
Germin/mistral-pretraining | Germin | 2025-06-22T04:17:49Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T11:02:30Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: mistral-pretraining
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-pretraining
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (restated below as a `TrainingArguments` sketch):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
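
As a hedged reconstruction only — this is not the published training script, and `output_dir` is a placeholder — the list above corresponds roughly to:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="mistral-pretraining",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```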
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v10 | Salmaalaa | 2025-06-22T04:16:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T20:18:43Z | ---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
model_name: CodeLlama-7b-Instruct_AR2SQL_v10
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for CodeLlama-7b-Instruct_AR2SQL_v10
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v10", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
wcwong22000/mblistingrues_lora_model | wcwong22000 | 2025-06-22T04:16:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T04:15:52Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wcwong22000
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TrainingModels/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-domestic_fluffy_wasp | TrainingModels | 2025-06-22T04:14:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am domestic fluffy wasp",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T02:52:23Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-domestic_fluffy_wasp
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am domestic fluffy wasp
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-domestic_fluffy_wasp
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TrainingModels/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-domestic_fluffy_wasp", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FULL-Marco-Antelo-Video-Completo/FULL.VIDEO.anabel.angus.y.marco.antelo.filtrado.viral.On.Social.Media | FULL-Marco-Antelo-Video-Completo | 2025-06-22T04:14:22Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:12:47Z | <a href="https://tinyurl.com/5aaruyax" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
rodrigomt/quem-4b | rodrigomt | 2025-06-22T04:14:07Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"merge",
"mergekit",
"lazymergekit",
"Menlo/Jan-nano",
"prithivMLmods/Vulpecula-4B",
"POLARIS-Project/Polaris-4B-Preview",
"Tesslate/UIGEN-T3-4B-Preview-MAX",
"text-generation",
"conversational",
"en",
"pt",
"base_model:Menlo/Jan-nano",
"base_model:merge:Menlo/Jan-nano",
"base_model:POLARIS-Project/Polaris-4B-Preview",
"base_model:merge:POLARIS-Project/Polaris-4B-Preview",
"base_model:Tesslate/UIGEN-T3-4B-Preview-MAX",
"base_model:merge:Tesslate/UIGEN-T3-4B-Preview-MAX",
"base_model:prithivMLmods/Vulpecula-4B",
"base_model:merge:prithivMLmods/Vulpecula-4B",
"region:us"
] | text-generation | 2025-06-22T00:39:39Z | ---
base_model:
- Menlo/Jan-nano
- prithivMLmods/Vulpecula-4B
- POLARIS-Project/Polaris-4B-Preview
- Tesslate/UIGEN-T3-4B-Preview-MAX
tags:
- merge
- mergekit
- lazymergekit
- Menlo/Jan-nano
- prithivMLmods/Vulpecula-4B
- POLARIS-Project/Polaris-4B-Preview
- Tesslate/UIGEN-T3-4B-Preview-MAX
language:
- en
- pt
pipeline_tag: text-generation
---
# quem-4b
**quem-4b** is a 4-billion parameter language model based on the **Qwen3** architecture, created through a balanced merge of four specialized models. This model combines diverse capabilities to offer a robust and versatile conversational experience.
## Overview
**quem-4b** represents an innovative model merging approach, using the **DARE TIES** technique with perfectly balanced weights among four complementary models. Based on the Qwen3 architecture, it offers excellent performance in conversational and instruction-following tasks.
### Key Features
- **Balanced Merge:** Equal weights (25% each) for maximum harmony
- **Qwen3 Base:** Modern and efficient architecture
- **Multiple Specializations:** Combination of diverse capabilities
- **Conversational:** Optimized for natural interaction
- **Multilingual:** Support for multiple languages
### Base Models Used
**quem-4b** is the result of a strategic and balanced merge of the following models:
- **[Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano)**
- **[prithivMLmods/Vulpecula-4B](https://huggingface.co/prithivMLmods/Vulpecula-4B)**
- **[POLARIS-Project/Polaris-4B-Preview](https://huggingface.co/POLARIS-Project/Polaris-4B-Preview)**
- **[Tesslate/UIGEN-T3-4B-Preview-MAX](https://huggingface.co/Tesslate/UIGEN-T3-4B-Preview-MAX)**
### Merge Tool
The merge was performed using **[LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing)**, ensuring a harmonious integration of the different specializations.
## Technical Configuration
### Merge Parameters
```yaml
models:
- model: Menlo/Jan-nano
parameters:
density: 0.6
weight: 0.25
- model: prithivMLmods/Vulpecula-4B
parameters:
density: 0.6
weight: 0.25
- model: POLARIS-Project/Polaris-4B-Preview
parameters:
density: 0.6
weight: 0.25
- model: Tesslate/UIGEN-T3-4B-Preview-MAX
parameters:
density: 0.6
weight: 0.25
merge_method: dare_ties
base_model: unsloth/Qwen3-4B
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
### Technical Specifications
- **Architecture:** Qwen3 4B
- **Merge Method:** DARE TIES
- **Distribution:** Perfectly balanced (25% each model)
- **Precision:** BFloat16
- **Density:** 0.6 for all components
- **Normalization:** Enabled
- **Int8 Mask:** Enabled
## How to Use
### Installing Dependencies
```bash
pip install -qU transformers accelerate torch
```
### Basic Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
# Model configuration
model_name = "rodrigomt/quem-4b"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
# Conversation example
messages = [
{"role": "user", "content": "What is a large language model?"}
]
# Apply chat template
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Pipeline configuration
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
# Text generation
outputs = pipeline(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
repetition_penalty=1.1
)
print(outputs[0]["generated_text"])
```
### Usage Example for Different Tasks
```python
# Example 1: General conversation
conversation_prompt = [
{"role": "user", "content": "Explain machine learning for beginners"}
]
# Example 2: Instruction following
instruction_prompt = [
{"role": "user", "content": "Create a list of 5 benefits of artificial intelligence"}
]
# Example 3: Analysis and reasoning
analysis_prompt = [
{"role": "user", "content": "Compare the pros and cons of remote work"}
]
# Example 4: Creativity
creative_prompt = [
{"role": "user", "content": "Write a short poem about technology"}
]
def generate_response(messages, max_tokens=256, temperature=0.7):
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = pipeline(
        prompt,
        max_new_tokens=max_tokens,
        do_sample=True,  # sampling must be enabled for temperature/top_p to take effect
        temperature=temperature,
        top_p=0.95,
        repetition_penalty=1.1
    )
return outputs[0]["generated_text"]
# Test different types of prompts
for prompt_type, messages in [
("Conversation", conversation_prompt),
("Instruction", instruction_prompt),
("Analysis", analysis_prompt),
("Creativity", creative_prompt)
]:
print(f"\n--- {prompt_type} ---")
response = generate_response(messages)
print(response)
```
### Advanced Usage Example with Granular Control
```python
def advanced_generate(
prompt_text,
max_tokens=256,
temperature=0.7,
top_k=50,
top_p=0.95,
repetition_penalty=1.1
):
    # Tokenize and move tensors to the model's device; calling the tokenizer
    # directly also returns the attention mask, which avoids relying on a
    # possibly-unset pad_token_id.
    inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_tokens,
            do_sample=True,
            temperature=temperature,
            top_k=top_k,
            top_p=top_p,
            repetition_penalty=repetition_penalty,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.eos_token_id
        )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
return response
# Optimized settings for different scenarios
configs = {
"creative": {"temperature": 0.9, "top_p": 0.95, "repetition_penalty": 1.2},
"analytical": {"temperature": 0.3, "top_k": 30, "repetition_penalty": 1.1},
"conversational": {"temperature": 0.7, "top_p": 0.9, "repetition_penalty": 1.15}
}
# Using the configurations
creative_response = advanced_generate("Tell a story about", **configs["creative"])
analytical_response = advanced_generate("Analyze the data:", **configs["analytical"])
```
## System Requirements
### Minimum Configuration
- **RAM:** 16GB
- **VRAM:** 8GB (GPU)
- **Storage:** 20GB available
- **GPU:** GTX 3070, RTX 3060 Ti or higher
### Recommended Configuration
- **RAM:** 32GB
- **VRAM:** 12GB (GPU)
- **GPU:** RTX 4070, RTX 3080 or higher
- **CPU:** Modern multi-core processor
### Optimization Techniques
#### Quantization
```python
# Int8 quantization for memory savings
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
llm_int8_skip_modules=["lm_head"]
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=quantization_config,
device_map="auto"
)
```
#### Acceleration with TensorRT
```python
# Placeholder only: a real TensorRT-LLM deployment requires engine building
# and a runtime configuration that are beyond the scope of this card; see
# the TensorRT-LLM documentation for a full production setup.
import tensorrt_llm  # noqa: F401
```
#### Batch Inference
```python
# Batch processing for higher throughput
def batch_generate(prompts_list, batch_size=4):
results = []
for i in range(0, len(prompts_list), batch_size):
batch = prompts_list[i:i+batch_size]
batch_outputs = pipeline(batch, max_new_tokens=256, batch_size=batch_size)
results.extend(batch_outputs)
return results
```
## Advanced Settings
### Creativity Control
```python
# Settings for different levels of creativity
creativity_levels = {
"conservative": {"temperature": 0.2, "top_p": 0.8, "top_k": 20},
"balanced": {"temperature": 0.7, "top_p": 0.9, "top_k": 50},
"creative": {"temperature": 1.0, "top_p": 0.95, "top_k": 100}
}
```
### Repetition Prevention
```python
# Techniques to avoid repetition
anti_repetition_config = {
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 3,
"encoder_repetition_penalty": 1.0,
"length_penalty": 1.0
}
```
### Advantages of quem-4b
- **Balanced Merge:** Harmonious combination of specializations
- **Qwen3 Base:** Modern and efficient architecture
- **Versatility:** Excellent in multiple tasks
- **Efficiency:** Great performance-to-resource ratio
## License
This model is licensed under the **Apache 2.0 License**.
|
JesseLiu/llama32-3b-kpath-baseline-grpo-lora | JesseLiu | 2025-06-22T04:13:46Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-06-20T00:04:42Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
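
Pending an official snippet, a minimal sketch — assuming a standard PEFT LoRA adapter on top of the base model named above (access to the gated base repo is required) — would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: loads the LoRA adapter on top of its base model.
base_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "JesseLiu/llama32-3b-kpath-baseline-grpo-lora")
```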
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Ailonspace/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lethal_wily_gull | Ailonspace | 2025-06-22T04:13:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lethal wily gull",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T04:13:22Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lethal_wily_gull
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lethal wily gull
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lethal_wily_gull
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ailonspace/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-lethal_wily_gull", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
11marco-antelo1/Hot.Full.video.de.anabel.angus.anabel.angus.camara.de.seguridad.video.filtrado.marco.antelo | 11marco-antelo1 | 2025-06-22T04:11:06Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:10:43Z | |
VIDEOS-18-zara-dar-Viral-Video-Link/FULL.VIDEO.zara.dar.Viral.Video.Tutorial.Official | VIDEOS-18-zara-dar-Viral-Video-Link | 2025-06-22T04:10:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:10:02Z | |
nrmmtr11878/nrmmtrfllfckd2k | nrmmtr11878 | 2025-06-22T04:06:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T03:38:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfllfckd2k
---
# Nrmmtrfllfckd2K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfllfckd2k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfllfckd2k",
"lora_weights": "https://huggingface.co/nrmmtr11878/nrmmtrfllfckd2k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/nrmmtrfllfckd2k', weight_name='lora.safetensors')
image = pipeline('nrmmtrfllfckd2k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/nrmmtrfllfckd2k/discussions) to add images that show off what you've made with this LoRA.
|
Video-viral-de-Anabel-Angus-y-Marco-Antelo/Completo-FULL.18.VIDEO.DE.ANABEL.ANGUS.Y.MARCO.ANTELO | Video-viral-de-Anabel-Angus-y-Marco-Antelo | 2025-06-22T04:05:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T04:05:14Z | |
pablo301/mantacanelonesblanca | pablo301 | 2025-06-22T04:04:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-22T04:00:40Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mantacanelonesblanca
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mantacanelonesblanca
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `mantacanelonesblanca` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
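For diffusers users, a minimal loading sketch; the weights filename is an assumption (Fluxgym typically exports `<name>.safetensors`), so check the repo's file listing:
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is an assumption -- verify against the repository files
pipeline.load_lora_weights("pablo301/mantacanelonesblanca", weight_name="mantacanelonesblanca.safetensors")
image = pipeline("mantacanelonesblanca").images[0]
image.save("out.png")
```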
|
hanslab37/poca-SoccerTwos | hanslab37 | 2025-06-22T04:00:42Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2025-06-22T04:00:31Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
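To publish a newly trained run back to the Hub, the course workflow uses `mlagents-push-to-hf`; the repo id, run id, and paths below are illustrative:
```bash
mlagents-push-to-hf --run-id="SoccerTwos" --local-dir="./results/SoccerTwos" --repo-id="<your-username>/poca-SoccerTwos" --commit-message="First push"
```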
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hanslab37/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
VestaCloset/idm-vton-model | VestaCloset | 2025-06-22T04:00:27Z | 0 | 0 | null | [
"onnx",
"arxiv:2304.10567",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-15T21:03:32Z | ---
title: IDM VTON
emoji: ๐๐๐
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 4.24.0
app_file: app.py
pinned: false
license: cc-by-nc-sa-4.0
short_description: High-fidelity Virtual Try-on
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# IDM-VTON Virtual Try-On System
A complete virtual try-on system based on IDM-VTON, featuring human parsing, pose estimation, and high-quality garment fitting using Stable Diffusion XL.
## Features
- **Complete Virtual Try-On Pipeline**: End-to-end garment fitting on human images
- **High-Quality Results**: Based on Stable Diffusion XL for realistic outputs
- **Multiple Garment Types**: Support for upper body, lower body, and dresses
- **Web Interface**: Gradio-based UI for easy interaction
- **API Endpoint**: Hugging Face Spaces deployment ready
- **Robust Preprocessing**: Human parsing, pose estimation, and DensePose integration
## Architecture
### Core Components
1. **Try-On Pipeline** (`src/tryon_pipeline.py`)
- Main SDXL-based inpainting pipeline
- Custom `tryon()` method for garment fitting
- Integration with all preprocessing components
2. **Custom UNet Models**
- `src/unet_hacked_tryon.py`: Main try-on generation
- `src/unet_hacked_garmnet.py`: Garment feature processing
3. **Preprocessing Pipeline**
- **Human Parsing**: Detectron2-based body segmentation
- **Pose Estimation**: OpenPose keypoint extraction
- **DensePose**: Detailed body surface mapping
- **Mask Generation**: Precise try-on area detection
4. **Web Interface** (`app.py`)
- Gradio-based UI with image upload
- Real-time try-on processing
- Advanced settings for customization
## Installation
### Prerequisites
- Python 3.8+
- CUDA-compatible GPU (recommended: 16GB+ VRAM)
- Git
### Setup
1. **Clone the repository**:
```bash
git clone <repository-url>
cd idm-tmp
```
2. **Install dependencies**:
```bash
pip install -r requirements.txt
```
3. **Download model weights**:
```bash
# The system will automatically download from yisol/IDM-VTON
# No manual download required
```
## Usage
### Web Interface
1. **Start the application**:
```bash
python app.py
```
2. **Open your browser** to the provided URL (usually `http://localhost:7860`)
3. **Upload images**:
- **Human Image**: Person wearing clothes
- **Garment Image**: Clothing item to try on
4. **Configure settings**:
- **Garment Description**: Text description of the clothing
- **Auto Parsing**: Enable automatic body segmentation
- **Crop Image**: Auto-crop to 3:4 aspect ratio
- **Denoising Steps**: Quality vs speed trade-off (20-40)
- **Seed**: For reproducible results
5. **Click "Try-on"** to generate the result
### API Usage
The system provides a REST API endpoint:
```python
import requests
# Example API call
response = requests.post(
"https://your-endpoint-url",
json={
"human_img": "https://example.com/person.jpg",
"garm_img": "https://example.com/dress.jpg",
"category": "upper_body" # optional
}
)
# Response contains PNG image bytes
with open("result.png", "wb") as f:
f.write(response.content)
```
## Configuration
### Supported Garment Categories
- `upper_body`: T-shirts, shirts, jackets, sweaters
- `lower_body`: Pants, jeans, skirts
- `dresses`: Full-body garments
### Image Requirements
- **Human Image**: Any aspect ratio, will be resized to 768x1024
- **Garment Image**: Will be resized to 768x1024
- **Format**: PNG, JPEG, or other common formats
- **Quality**: Higher resolution inputs produce better results
### Performance Settings
- **Denoising Steps**: 20-40 (higher = better quality, slower)
- **Guidance Scale**: 7.5 (default, good balance)
- **Seed**: Set for reproducible results
## Deployment
### Hugging Face Spaces
1. **Create a new Space** on Hugging Face
2. **Upload your code** to the repository
3. **Configure the Space**:
- **SDK**: Gradio
- **Hardware**: GPU (T4 or better recommended)
- **Python Version**: 3.8+
4. **Deploy** - the system will automatically:
- Install dependencies from `requirements.txt`
- Download model weights on first run
- Start the web interface
### Production Deployment
For production use, consider:
1. **Hardware Requirements**:
- **GPU**: 16GB+ VRAM (A100, V100, or similar)
- **RAM**: 32GB+ system memory
- **Storage**: 50GB+ for models and cache
2. **Performance Optimization**:
- Enable XFormers for faster attention
- Use batch processing for multiple requests
- Implement caching for repeated requests
3. **Monitoring**:
- Track inference times
- Monitor GPU memory usage
- Set up error logging
## Troubleshooting
### Common Issues
1. **Import Errors**:
```bash
# Ensure all dependencies are installed
pip install -r requirements.txt
```
2. **CUDA Out of Memory**:
- Reduce image resolution
- Lower denoising steps
- Use smaller batch sizes
3. **Model Loading Issues**:
- Check internet connection for model downloads
- Verify sufficient disk space
- Ensure CUDA compatibility
4. **Preprocessing Errors**:
- Verify Detectron2 installation
- Check OpenPose dependencies
- Ensure DensePose models are available
### Performance Tips
- **Use XFormers**: Automatically enabled for faster attention (see the sketch after this list)
- **Optimize Images**: Pre-resize large images to 768x1024
- **Batch Processing**: Process multiple requests together
- **Caching**: Cache model outputs for repeated inputs
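A minimal sketch of the first two tips, assuming `pipe` is the already-loaded try-on pipeline (the variable name is illustrative; this is not the project's documented API):
```python
from PIL import Image

# Explicitly enable memory-efficient attention when xformers is installed.
try:
    pipe.enable_xformers_memory_efficient_attention()
except Exception:
    pass  # fall back to the default attention implementation

# Pre-resize inputs to the 768x1024 resolution the pipeline expects.
human_img = Image.open("person.jpg").convert("RGB").resize((768, 1024))
garment_img = Image.open("dress.jpg").convert("RGB").resize((768, 1024))
```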
## Performance
### Typical Performance (RTX 4090)
- **Model Loading**: ~30 seconds (first time)
- **Inference Time**: ~5-10 seconds per image
- **Memory Usage**: ~12-15GB GPU memory
- **Output Quality**: High-resolution 768x1024 images
### Scaling Considerations
- **Concurrent Requests**: Limited by GPU memory
- **Batch Processing**: Can handle multiple images simultaneously
- **Caching**: Model stays loaded between requests
## Contributing
1. **Fork the repository**
2. **Create a feature branch**
3. **Make your changes**
4. **Add tests** if applicable
5. **Submit a pull request**
## License
This project is based on IDM-VTON research. Please refer to the original paper and repository for licensing information.
## Acknowledgments
- **IDM-VTON Authors**: Original research and model
- **Hugging Face**: Diffusers library and Spaces platform
- **Detectron2**: Human parsing implementation
- **OpenPose**: Pose estimation framework
- **DensePose**: Body surface mapping
## References
- [IDM-VTON Paper](https://arxiv.org/abs/2304.10567)
- [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
- [Diffusers Library](https://github.com/huggingface/diffusers)
- [Detectron2](https://github.com/facebookresearch/detectron2)
- [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) |
18-Marco-Antelo-Video-completo-link/CC.CAMERA.NEW.VIDEO.anabel.angus.y.marco.antelo.filtrado.viral.On.Social.Media.Link | 18-Marco-Antelo-Video-completo-link | 2025-06-22T03:58:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T03:58:07Z | |
mob2711/llama_3b_1k | mob2711 | 2025-06-22T03:56:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T03:56:15Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
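A minimal inference sketch with transformers; this is an illustrative usage, as the card does not document a prompt or chat format:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mob2711/llama_3b_1k", device_map="auto")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```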
|
aaa99922/Ayuwoki | aaa99922 | 2025-06-22T03:53:20Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-22T03:51:53Z | ---
license: other
license_name: flux-1-dev-non-commercial
license_link: https://weights.gg/license/flux
---
|
appledora/recast3.1-G8W32H8 | appledora | 2025-06-22T03:53:13Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"recast8b_llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-16T02:50:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
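In the absence of author-provided code, a sketch based on the repo tags; `trust_remote_code=True` is required because the repo ships a custom `recast8b_llama` architecture:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appledora/recast3.1-G8W32H8"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```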
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rodrigomt/gama-12b | rodrigomt | 2025-06-22T03:51:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"merge",
"gemma",
"text-generation",
"conversational",
"allura-org/Gemma-3-Glitter-12B",
"soob3123/amoral-gemma3-12B-v2-qat",
"soob3123/Veiled-Calla-12B",
"en",
"pt",
"base_model:allura-org/Gemma-3-Glitter-12B",
"base_model:merge:allura-org/Gemma-3-Glitter-12B",
"base_model:soob3123/Veiled-Calla-12B",
"base_model:merge:soob3123/Veiled-Calla-12B",
"base_model:soob3123/amoral-gemma3-12B-v2-qat",
"base_model:merge:soob3123/amoral-gemma3-12B-v2-qat",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T03:02:22Z | ---
base_model:
- allura-org/Gemma-3-Glitter-12B
- soob3123/amoral-gemma3-12B-v2-qat
- soob3123/Veiled-Calla-12B
library_name: transformers
tags:
- merge
- gemma
- text-generation
- conversational
- allura-org/Gemma-3-Glitter-12B
- soob3123/amoral-gemma3-12B-v2-qat
- soob3123/Veiled-Calla-12B
license: gemma
language:
- en
- pt
pipeline_tag: text-generation
---
# gama-12b
**gama-12b** is a 12-billion parameter language model created through the strategic merge of multiple specialized models. This model combines the capabilities of different architectures to offer a more robust and versatile conversational experience.
## Overview
This model was developed using the **DARE TIES** technique, which combines DARE (Drop And REscale) sparsification with TIES (TrIm, Elect Sign & merge) conflict resolution, allowing different specializations to be combined efficiently into a single cohesive model.
### Base Models Used
**gama-12b** is the result of merging the following models:
- **[soob3123/amoral-gemma3-12B-v2-qat](https://huggingface.co/soob3123/amoral-gemma3-12B-v2-qat)**
- **[allura-org/Gemma-3-Glitter-12B](https://huggingface.co/allura-org/Gemma-3-Glitter-12B)**
- **[soob3123/Veiled-Calla-12B](https://huggingface.co/soob3123/Veiled-Calla-12B)**
### Merge Tool
The merge was performed using **[LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing)**, a tool that facilitates the process of merging language models.
## Technical Configuration
### Merge Parameters
```yaml
models:
- model: soob3123/amoral-gemma3-12B-v2-qat
parameters:
density: 0.6
weight: 0.33
- model: allura-org/Gemma-3-Glitter-12B
parameters:
density: 0.6
weight: 0.33
- model: soob3123/Veiled-Calla-12B
parameters:
density: 0.6
weight: 0.34
merge_method: dare_ties
base_model: unsloth/gemma-3-12b-it-qat
parameters:
normalize: true
int8_mask: true
device: auto
dtype: float16
```
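The same configuration can be executed directly with the underlying mergekit CLI that LazyMergekit wraps (file and output paths below are illustrative):
```bash
pip install mergekit
mergekit-yaml gama-12b.yaml ./gama-12b --cuda
```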
### Technical Specifications
- **Architecture:** Gemma-3 12B
- **Merge Method:** DARE TIES
- **Precision:** Float16
- **Quantization:** QAT (Quantization Aware Training)
- **Normalization:** Enabled
- **Int8 Mask:** Enabled
## How to Use
### Installing Dependencies
```bash
pip install -qU transformers accelerate torch
```
### Basic Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
# Model configuration
model_name = "rodrigomt/gama-12b"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True
)
# Prepare the message
messages = [
{"role": "user", "content": "What is a large language model?"}
]
# Apply chat template
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Pipeline configuration
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.float16,
device_map="auto",
)
# Text generation
outputs = pipeline(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
repetition_penalty=1.1
)
print(outputs[0]["generated_text"])
```
### Advanced Usage Example
```python
# For more granular control over generation
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)  # keep inputs on the model's device
attention_mask = inputs.ne(tokenizer.pad_token_id)
with torch.no_grad():
outputs = model.generate(
inputs,
attention_mask=attention_mask,
max_new_tokens=256,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
repetition_penalty=1.1,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Key Features
- **Versatility:** Combines capabilities from multiple specialized models
- **Efficiency:** Optimized with QAT quantization for better performance
- **Compatibility:** Fully compatible with the Transformers library
- **Scalability:** Supports deployment on different hardware configurations
## System Requirements
### Recommended Minimums
- **RAM:** 32GB
- **VRAM:** 24GB (GPU)
- **Storage:** 50GB available
### Ideal Configuration
- **RAM:** 64GB+
- **VRAM:** 40GB+ (GPU)
- **GPU:** A6000, A100, or higher
## License
This model is licensed under the **Gemma License**. |
18-Videos-Pakcricketinfo-Sapna-Shah-viral/FULL.VIDEO.LINK.Pakcricketinfo.Sapna.Shah.Viral.Video.Tutorial.Official | 18-Videos-Pakcricketinfo-Sapna-Shah-viral | 2025-06-22T03:51:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T03:51:24Z | |
ljnlonoljpiljm/webssl-mae300m-full2b-224-like-dislike | ljnlonoljpiljm | 2025-06-22T03:51:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-22T03:51:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
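In the absence of author-provided code, a sketch based on the repo's `image-classification` pipeline tag (the image path is illustrative; the label set, presumably like/dislike, is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ljnlonoljpiljm/webssl-mae300m-full2b-224-like-dislike",
)
print(classifier("photo.jpg"))  # illustrative local image path
```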
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/Qwen3-14B-4bit-AWQ | mlx-community | 2025-06-22T03:49:43Z | 1,907 | 3 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-06T15:22:57Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-14B
tags:
- mlx
---
# mlx-community/Qwen3-14B-4bit-AWQ
This model [mlx-community/Qwen3-14B-4bit-AWQ](https://huggingface.co/mlx-community/Qwen3-14B-4bit-AWQ) was
converted to MLX format from [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B)
using mlx-lm version **0.25.2**.
AWQ Parameters: `--bits 4 --group-size 64 --embed-bits 4 --embed-group-size 32 --num-samples 256 --sequence-length 1024 --n-grid 50`
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-14B-4bit-AWQ")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
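The model can also be exercised from the command line with the mlx-lm CLI (assuming a recent mlx-lm install):
```bash
python -m mlx_lm.generate --model mlx-community/Qwen3-14B-4bit-AWQ --prompt "hello"
```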
|
Official-job-guru-online-18-viral-videos/FULL.VIDEO.job.guru.online.Viral.Video.Tutorial.Official | Official-job-guru-online-18-viral-videos | 2025-06-22T03:45:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-22T03:44:45Z | |