modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-14 06:27:53) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 519 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-14 06:27:45) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
tim-lawson/fineweb-baseline-8-layers-v0 | tim-lawson | 2025-05-24T09:21:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T06:22:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
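Since this section is left unfilled, here is a minimal sketch of what getting started might look like, assuming the repository exposes a standard causal language model loadable through the transformers Auto classes (only the model ID is taken from this row; everything else is an assumption):
```python
# Minimal sketch, assuming a standard causal LM checkpoint; untested against this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tim-lawson/fineweb-baseline-8-layers-v0"  # from this dataset row
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```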
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VIDEO-18-Pastors-daughter-Viral-Video/FULL.VIDEO.LINK.Pastors.daughter.Viral.Video.Leaks.Official | VIDEO-18-Pastors-daughter-Viral-Video | 2025-05-24T09:19:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T09:19:02Z | |
FULL-VIDEO-18-Katrina-Lim-Viral-Kiffy/FULL.VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official | FULL-VIDEO-18-Katrina-Lim-Viral-Kiffy | 2025-05-24T09:13:51Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T09:13:27Z | |
babaongu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose | babaongu | 2025-05-24T09:04:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am reclusive hardy mongoose",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-03T04:10:03Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am reclusive hardy mongoose
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="babaongu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sergioalves/759b48bd-76fe-46d8-baf9-4db786486b72 | sergioalves | 2025-05-24T08:58:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:quantized:berkeley-nest/Starling-LM-7B-alpha",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-24T08:44:25Z | ---
base_model: berkeley-nest/Starling-LM-7B-alpha
library_name: transformers
model_name: 759b48bd-76fe-46d8-baf9-4db786486b72
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 759b48bd-76fe-46d8-baf9-4db786486b72
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/759b48bd-76fe-46d8-baf9-4db786486b72", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/jxwsqapn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
omarwaleed523/roberta-base-pan-clef-subtask2 | omarwaleed523 | 2025-05-24T08:54:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-23T20:56:43Z | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-pan-clef-subtask2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-pan-clef-subtask2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4936
- Micro F1: 0.5654
- Macro F1: 0.6413
- Macro Recall: 0.7827
- Accuracy: 0.5654
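The row's pipeline tag is text-classification, so a minimal usage sketch (an assumption on our part; the card itself documents no usage) could look like this:
```python
# Minimal sketch for a text-classification checkpoint; the label set comes from the repo's config.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="omarwaleed523/roberta-base-pan-clef-subtask2",
)
print(classifier("Example input sentence to classify."))
```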
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | Macro Recall | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------------:|:--------:|
| 0.1032 | 1.0 | 4515 | 3.0365 | 0.5613 | 0.6005 | 0.7793 | 0.5613 |
| 0.0501 | 1.9997 | 9028 | 3.4936 | 0.5654 | 0.6413 | 0.7827 | 0.5654 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
robertou2/task-9-microsoft-Phi-3.5-mini-instruct | robertou2 | 2025-05-24T08:38:11Z | 1,393 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
]
| null | 2025-05-13T16:57:03Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
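Since this section is unfilled, a minimal sketch follows, assuming (per the card's base_model and library_name fields) that this repo holds a PEFT adapter for Phi-3.5-mini-instruct; the prompt and generation settings are illustrative:
```python
# Minimal sketch: load the base model, then attach this repo's PEFT adapter weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3.5-mini-instruct"  # from the card's base_model field
adapter_id = "robertou2/task-9-microsoft-Phi-3.5-mini-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```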
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS/FULL.VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official | VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS | 2025-05-24T08:37:24Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T08:37:02Z | |
New-Bollywood-Actress-Viral-Video/FULL.VIDEO.LINK.Bollywood.Actress.Viral.Video.Leaks.Official | New-Bollywood-Actress-Viral-Video | 2025-05-24T08:35:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T08:35:08Z | |
kitten-kitkat/unsloth-qwen14b | kitten-kitkat | 2025-05-24T08:30:03Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"unsloth",
"license:mit",
"region:us"
]
| null | 2025-05-24T08:01:15Z | ---
license: mit
tags:
- unsloth
---
|
tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23 | tscstudios | 2025-05-24T08:23:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-24T08:23:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Qymi0Imrdzzj3Ryhdijwarixgri1_9105Ff9D 108F 49F1 8359 502893E0Ce23
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23/discussions) to add images that show off what you’ve made with this LoRA.
|
mohhtl/e3dd1b39-d1ee-4b83-bf8a-05a7164b5711 | mohhtl | 2025-05-24T08:21:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"dataset:29100ea8-8f91-45bb-841b-4a7bf8ea8b43_test.json",
"dataset:29100ea8-8f91-45bb-841b-4a7bf8ea8b43_synth.json",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-24T08:21:18Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- generated_from_trainer
datasets:
- 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_test.json
- 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_synth.json
model-index:
- name: results/e3dd1b39-d1ee-4b83-bf8a-05a7164b5711
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
dataset_prepared_path: results/29100ea8-8f91-45bb-841b-4a7bf8ea8b43_last_run_prepared
datasets:
- path: 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_test.json
type: &id001
field: null
field_input: input
field_instruction: instruct
field_output: output
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
- path: 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_synth.json
type: *id001
flash_attention: true
gradient_accumulation_steps: 4
gradient_checkpointing: true
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_model_dir: null
lora_r: 8
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
loss_watchdog_patience: 3
loss_watchdog_threshold: 5.0
lr_scheduler: constant
micro_batch_size: 2
num_epochs: 15
optimizer: adamw_8bit
output_dir: results/e3dd1b39-d1ee-4b83-bf8a-05a7164b5711
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
save_total_limit: 1
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: <|endoftext|>
test_datasets:
- path: 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_test.json
split: train
type: *id001
- path: 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_synth.json
split: train
type: *id001
tf32: false
val_set_size: 0.0
wandb_entity: null
wandb_log_model: null
wandb_name: null
wandb_project: null
wandb_watch: null
warmup_ratio: 0.0
warmup_steps: 0
weight_decay: 0.0
```
</details><br>
# results/e3dd1b39-d1ee-4b83-bf8a-05a7164b5711
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_test.json and the 29100ea8-8f91-45bb-841b-4a7bf8ea8b43_synth.json datasets.
It achieves the following results on the evaluation set:
- Loss: 0.0072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.3763 | 1.0 | 32 | 0.3526 |
| 0.2686 | 2.0 | 64 | 0.2226 |
| 0.1961 | 3.0 | 96 | 0.1569 |
| 0.1214 | 4.0 | 128 | 0.1078 |
| 0.1049 | 5.0 | 160 | 0.0714 |
| 0.0793 | 6.0 | 192 | 0.0390 |
| 0.04 | 7.0 | 224 | 0.0225 |
| 0.0259 | 8.0 | 256 | 0.0159 |
| 0.0162 | 9.0 | 288 | 0.0109 |
| 0.0176 | 10.0 | 320 | 0.0112 |
| 0.0283 | 11.0 | 352 | 0.0087 |
| 0.011 | 12.0 | 384 | 0.0089 |
| 0.0198 | 13.0 | 416 | 0.0075 |
| 0.0141 | 14.0 | 448 | 0.0065 |
| 0.0107 | 14.5354 | 465 | 0.0072 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
mlx-community/AceReason-Nemotron-14B-bf16 | mlx-community | 2025-05-24T08:09:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:finetune:nvidia/AceReason-Nemotron-14B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T08:08:00Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- mlx
- mlx-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# mlx-community/AceReason-Nemotron-14B-bf16
The model [mlx-community/AceReason-Nemotron-14B-bf16](https://huggingface.co/mlx-community/AceReason-Nemotron-14B-bf16) was converted to MLX format from [nvidia/AceReason-Nemotron-14B](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/AceReason-Nemotron-14B-bf16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
manohar-lal-dhakad-mms/manohar.lal.dhakad.mms.manohar.lal.dhakad.viral.video | manohar-lal-dhakad-mms | 2025-05-24T08:08:29Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T08:07:08Z | |
mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF | mradermacher | 2025-05-24T06:19:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator",
"base_model:quantized:Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-24T06:10:52Z | ---
base_model: Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
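For multi-part files specifically, the usual approach is simply to concatenate the parts in order into one file before loading; a sketch with hypothetical part names (real names follow the pattern shown in the repo's file listing):
```bash
# Hypothetical part names; concatenate split GGUF parts in order, then load the result.
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```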
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
keko24/Qwen3-0.6B-SFT-Tulu-MathCodeSciTable | keko24 | 2025-05-24T06:18:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T06:17:39Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
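As this section is unfilled, here is a minimal sketch mirroring the quick-start pattern used by other TRL cards in this dump; it assumes a standard chat-tuned causal LM (model ID from this row, everything else illustrative):
```python
# Minimal sketch, assuming a chat-formatted text-generation checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="keko24/Qwen3-0.6B-SFT-Tulu-MathCodeSciTable",
)
messages = [{"role": "user", "content": "Solve: 12 * 9 = ?"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```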
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF | mradermacher | 2025-05-24T06:10:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Ar4ikov/gpt2-medium-2-stable-diffusion-prompt-generator",
"base_model:quantized:Ar4ikov/gpt2-medium-2-stable-diffusion-prompt-generator",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:51:33Z | ---
base_model: Ar4ikov/gpt2-medium-2-stable-diffusion-prompt-generator
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ar4ikov/gpt2-medium-2-stable-diffusion-prompt-generator
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medium-2-stable-diffusion-prompt-generator-GGUF/resolve/main/gpt2-medium-2-stable-diffusion-prompt-generator.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/aaronGPTalpha-i1-GGUF | mradermacher | 2025-05-24T06:10:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:totallynotbrent/aaronGPTalpha",
"base_model:quantized:totallynotbrent/aaronGPTalpha",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-24T06:05:11Z | ---
base_model: totallynotbrent/aaronGPTalpha
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/totallynotbrent/aaronGPTalpha
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/aaronGPTalpha-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF/resolve/main/aaronGPTalpha.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
amirahav/amir | amirahav | 2025-05-24T06:09:12Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-24T05:23:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV72 | RobertoSonic | 2025-05-24T06:09:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-24T05:36:22Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-dmae-humeda-DAV72
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-dmae-humeda-DAV72
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7140
- Accuracy: 0.8743
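The card gives no usage snippet; a minimal sketch based on the image-classification pipeline tag (the image path is a hypothetical placeholder, and class labels come from the repo's config) might be:
```python
# Minimal sketch for an image-classification checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV72",
)
print(classifier("example.jpg"))  # path or URL to an input image (hypothetical)
```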
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.0851 | 1.0 | 15 | 1.0597 | 0.4857 |
| 0.9595 | 2.0 | 30 | 0.9427 | 0.64 |
| 0.8642 | 3.0 | 45 | 0.6913 | 0.7086 |
| 0.5836 | 4.0 | 60 | 0.5780 | 0.7257 |
| 0.5395 | 5.0 | 75 | 0.4822 | 0.7829 |
| 0.4215 | 6.0 | 90 | 0.4077 | 0.8229 |
| 0.4329 | 7.0 | 105 | 0.4352 | 0.8114 |
| 0.3695 | 8.0 | 120 | 0.3244 | 0.8743 |
| 0.3314 | 9.0 | 135 | 0.3186 | 0.8914 |
| 0.3176 | 10.0 | 150 | 0.3788 | 0.8514 |
| 0.3368 | 11.0 | 165 | 0.3458 | 0.8629 |
| 0.2558 | 12.0 | 180 | 0.4196 | 0.8457 |
| 0.2579 | 13.0 | 195 | 0.3485 | 0.8743 |
| 0.2413 | 14.0 | 210 | 0.4509 | 0.8629 |
| 0.2374 | 15.0 | 225 | 0.3904 | 0.8743 |
| 0.2214 | 16.0 | 240 | 0.3461 | 0.8514 |
| 0.2189 | 17.0 | 255 | 0.5986 | 0.8229 |
| 0.2458 | 18.0 | 270 | 0.3360 | 0.8914 |
| 0.2431 | 19.0 | 285 | 0.3475 | 0.8857 |
| 0.2136 | 20.0 | 300 | 0.3242 | 0.88 |
| 0.1871 | 21.0 | 315 | 0.4103 | 0.8857 |
| 0.1996 | 22.0 | 330 | 0.3606 | 0.9029 |
| 0.1367 | 23.0 | 345 | 0.4657 | 0.8629 |
| 0.1963 | 24.0 | 360 | 0.4267 | 0.8743 |
| 0.1519 | 25.0 | 375 | 0.4322 | 0.8686 |
| 0.1365 | 26.0 | 390 | 0.4214 | 0.88 |
| 0.1158 | 27.0 | 405 | 0.4472 | 0.8743 |
| 0.1621 | 28.0 | 420 | 0.4020 | 0.8743 |
| 0.1271 | 29.0 | 435 | 0.4054 | 0.8857 |
| 0.136 | 30.0 | 450 | 0.4286 | 0.9143 |
| 0.1386 | 31.0 | 465 | 0.5015 | 0.8857 |
| 0.1153 | 32.0 | 480 | 0.6675 | 0.8629 |
| 0.1139 | 33.0 | 495 | 0.5458 | 0.8971 |
| 0.144 | 34.0 | 510 | 0.5303 | 0.88 |
| 0.1542 | 35.0 | 525 | 0.5164 | 0.8914 |
| 0.1208 | 36.0 | 540 | 0.5690 | 0.88 |
| 0.1034 | 37.0 | 555 | 0.7427 | 0.8571 |
| 0.0889 | 38.0 | 570 | 0.9084 | 0.8286 |
| 0.1355 | 39.0 | 585 | 0.5977 | 0.8743 |
| 0.0895 | 40.0 | 600 | 0.5400 | 0.8914 |
| 0.1072 | 41.0 | 615 | 0.6018 | 0.8743 |
| 0.1356 | 42.0 | 630 | 0.5493 | 0.8743 |
| 0.0953 | 43.0 | 645 | 0.5350 | 0.8914 |
| 0.0781 | 44.0 | 660 | 0.5269 | 0.88 |
| 0.0854 | 45.0 | 675 | 0.5428 | 0.88 |
| 0.0983 | 46.0 | 690 | 0.4897 | 0.8857 |
| 0.0944 | 47.0 | 705 | 0.5177 | 0.8971 |
| 0.1152 | 48.0 | 720 | 0.6401 | 0.8629 |
| 0.0608 | 49.0 | 735 | 0.7380 | 0.8629 |
| 0.0898 | 50.0 | 750 | 0.4922 | 0.8971 |
| 0.0923 | 51.0 | 765 | 0.5427 | 0.8971 |
| 0.0743 | 52.0 | 780 | 0.9941 | 0.84 |
| 0.0753 | 53.0 | 795 | 0.5342 | 0.8857 |
| 0.0751 | 54.0 | 810 | 0.6452 | 0.88 |
| 0.1222 | 55.0 | 825 | 0.6297 | 0.8743 |
| 0.0786 | 56.0 | 840 | 0.6592 | 0.8629 |
| 0.134 | 57.0 | 855 | 0.6541 | 0.8686 |
| 0.092 | 58.0 | 870 | 0.6523 | 0.8571 |
| 0.1036 | 59.0 | 885 | 0.5562 | 0.8971 |
| 0.0825 | 60.0 | 900 | 0.6117 | 0.8743 |
| 0.0923 | 61.0 | 915 | 0.5778 | 0.8686 |
| 0.0909 | 62.0 | 930 | 0.5974 | 0.8686 |
| 0.0536 | 63.0 | 945 | 0.7557 | 0.8514 |
| 0.0572 | 64.0 | 960 | 0.6255 | 0.8857 |
| 0.0824 | 65.0 | 975 | 0.6768 | 0.8686 |
| 0.0773 | 66.0 | 990 | 0.5942 | 0.9029 |
| 0.0495 | 67.0 | 1005 | 0.7902 | 0.8571 |
| 0.0649 | 68.0 | 1020 | 0.6097 | 0.8914 |
| 0.0852 | 69.0 | 1035 | 0.6614 | 0.8914 |
| 0.0634 | 70.0 | 1050 | 0.6604 | 0.8914 |
| 0.0774 | 71.0 | 1065 | 0.7848 | 0.8514 |
| 0.0803 | 72.0 | 1080 | 0.6424 | 0.8914 |
| 0.0645 | 73.0 | 1095 | 0.7508 | 0.8857 |
| 0.0483 | 74.0 | 1110 | 0.7523 | 0.8629 |
| 0.0586 | 75.0 | 1125 | 0.8278 | 0.8629 |
| 0.1 | 76.0 | 1140 | 0.7503 | 0.8686 |
| 0.0434 | 77.0 | 1155 | 0.7820 | 0.8743 |
| 0.0792 | 78.0 | 1170 | 0.7016 | 0.88 |
| 0.055 | 79.0 | 1185 | 0.8635 | 0.8571 |
| 0.0666 | 80.0 | 1200 | 0.7208 | 0.8686 |
| 0.0563 | 81.0 | 1215 | 0.7606 | 0.8686 |
| 0.0535 | 82.0 | 1230 | 0.7329 | 0.88 |
| 0.0499 | 83.0 | 1245 | 0.7253 | 0.88 |
| 0.0418 | 84.0 | 1260 | 0.7429 | 0.8686 |
| 0.0736 | 85.0 | 1275 | 0.7621 | 0.8743 |
| 0.0593 | 86.0 | 1290 | 0.7970 | 0.8571 |
| 0.0658 | 87.0 | 1305 | 0.7211 | 0.8686 |
| 0.0531 | 88.0 | 1320 | 0.7420 | 0.8686 |
| 0.0604 | 89.0 | 1335 | 0.7151 | 0.8743 |
| 0.0661 | 90.0 | 1350 | 0.6881 | 0.8857 |
| 0.058 | 91.0 | 1365 | 0.7139 | 0.8686 |
| 0.0436 | 92.0 | 1380 | 0.7260 | 0.8686 |
| 0.0733 | 93.0 | 1395 | 0.7150 | 0.8743 |
| 0.0501 | 93.3390 | 1400 | 0.7140 | 0.8743 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.19.0
- Tokenizers 0.21.1
|
mradermacher/aaronGPTalpha-GGUF | mradermacher | 2025-05-24T06:08:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:totallynotbrent/aaronGPTalpha",
"base_model:quantized:totallynotbrent/aaronGPTalpha",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:57:27Z | ---
base_model: totallynotbrent/aaronGPTalpha
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/totallynotbrent/aaronGPTalpha
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aaronGPTalpha-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
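As a concrete starting point, here is a minimal sketch using `llama-cpp-python` (an assumption — any GGUF-compatible runtime works); download one of the quant files from the table below first.
```python
from llama_cpp import Llama

# Sketch: run the Q4_K_M quant from the table below after downloading it locally.
llm = Llama(model_path="aaronGPTalpha.Q4_K_M.gguf")
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```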
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/aaronGPTalpha-GGUF/resolve/main/aaronGPTalpha.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mci29/sn29_s2m7_cpou | mci29 | 2025-05-24T05:54:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T05:50:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
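In the absence of author-provided code, the following is a generic sketch inferred from the repository tags (`phi3`, `text-generation`); the prompt and generation settings are illustrative assumptions only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic sketch based on the repo tags (phi3, text-generation); not author-verified.
tokenizer = AutoTokenizer.from_pretrained("mci29/sn29_s2m7_cpou")
model = AutoModelForCausalLM.from_pretrained("mci29/sn29_s2m7_cpou")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```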
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DA-ctrl-bot-GGUF | mradermacher | 2025-05-24T05:51:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:imumtozee/DA-ctrl-bot",
"base_model:quantized:imumtozee/DA-ctrl-bot",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T18:40:08Z | ---
base_model: imumtozee/DA-ctrl-bot
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/imumtozee/DA-ctrl-bot
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DA-ctrl-bot-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DA-ctrl-bot-GGUF/resolve/main/DA-ctrl-bot.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
huangqishan/NNModel | huangqishan | 2025-05-24T05:48:57Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"NNModel",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| image-classification | 2025-05-24T03:35:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
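No usage code is provided, so the sketch below is inferred from the repository tags (`image-classification`, `custom_code`); because the repo ships custom model code, `trust_remote_code=True` is required, and whether the generic pipeline applies at all is an assumption.
```python
from transformers import pipeline

# Generic sketch based on the repo tags; not author-verified.
classifier = pipeline(
    "image-classification",
    model="huangqishan/NNModel",
    trust_remote_code=True,  # the repo ships custom model code
)
print(classifier("example.jpg"))  # placeholder image path
```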
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep5_66 | MinaMila | 2025-05-24T05:48:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T05:48:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp2 | nis12ram | 2025-05-24T05:42:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nemotron",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct",
"base_model:finetune:nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T05:35:08Z | ---
base_model: nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- nemotron
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nis12ram
- **License:** apache-2.0
- **Finetuned from model:** nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
This nemotron model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep2_66 | MinaMila | 2025-05-24T05:37:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T05:37:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
7Dragons/prime_1 | 7Dragons | 2025-05-24T05:37:40Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-24T05:31:20Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
duydc/qwen-2.5-7b-formal-alpaca-instruct-2452025 | duydc | 2025-05-24T05:36:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T05:25:42Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen-2.5-7b-formal-alpaca-instruct-2452025
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-2.5-7b-formal-alpaca-instruct-2452025
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="duydc/qwen-2.5-7b-formal-alpaca-instruct-2452025", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/duydc/huggingface/runs/nny8kzrz)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
watch-katrina-lim-kiffy-full-origin/Firecnt-Katrina-Lim-Viral-Video-WhGoing-On-katrina-lim-viral-kiffy-video-telegram-link | watch-katrina-lim-kiffy-full-origin | 2025-05-24T05:34:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T05:33:54Z | Watch 🟢 ➤ ➤ ➤ <a href="https://witvidz.com/originalviralvideo"> 🌐 Click Here To link (Full Viral Video Link)
🔴 ➤►DOWNLOAD👉👉🟢 ➤
Watch 🟢 ➤ ➤ ➤ <a href="https://witvidz.com/originalviralvideo"> 🌐 Click Here To link (Full Viral Video Link)
🔴 ➤►DOWNLOAD👉👉🟢 ➤

|
DAKARA555/side | DAKARA555 | 2025-05-24T05:27:31Z | 16 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2025-05-19T19:57:22Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/white.png
base_model: Wan-AI/Wan2.1-I2V-14B-480P
instance_prompt: null
license: apache-2.0
---
# side
<Gallery />
## Model description
https://civitai.com/models/1361682/side-lying-sex-wan-i2v-14b
https://huggingface.co/DAKARA555/side/resolve/main/P001-SideSex-Wan-i2v-v10-000010_converted.safetensors?download=true
## Download model
Weights for this model are available in Safetensors format.
[Download](/DAKARA555/side/tree/main) them in the Files & versions tab.
|
DAKARA555/deepfera | DAKARA555 | 2025-05-24T05:11:30Z | 65 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2025-05-14T16:36:06Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/white.png
base_model: Wan-AI/Wan2.1-I2V-14B-480P
instance_prompt: null
license: apache-2.0
---
# deepfera
<Gallery />
## Model description
https://civitai.com/models/1395313/wan-dr34mjob-doublesinglehandy-blowjob?modelVersionId=1610465
https://huggingface.co/DAKARA555/deepfera/resolve/main/WAN_dr34mj0b.safetensors?download=true
## Download model
Weights for this model are available in Safetensors format.
[Download](/DAKARA555/deepfera/tree/main) them in the Files & versions tab.
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep4_55 | MinaMila | 2025-05-24T05:10:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T05:10:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Watchkatrinalim/Watch.katrina.lim.kiffy.full.original.viral.leaked.video | Watchkatrinalim | 2025-05-24T05:03:26Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T05:02:35Z | Watch 🟢 ➤ ➤ ➤ <a href="https://viraltrendzzz.com/sdvsdvdd"> 🌐 Click Here To link (Watch.katrina.lim.kiffy.full.original.viral.leaked.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤Watch 🟢 ➤ ➤ ➤ <a href="https://viraltrendzzz.com/sdvsdvdd"> 🌐 Watch.katrina.lim.kiffy.full.original.viral.leaked.video
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep2_55 | MinaMila | 2025-05-24T05:03:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T05:03:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb1p7c3q05s4u1cgo5i612ud_cmb1pm94j05sfu1cgscappdan | BootesVoid | 2025-05-24T04:43:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-24T04:43:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: eve
---
# Cmb1P7C3Q05S4U1Cgo5I612Ud_Cmb1Pm94J05Sfu1Cgscappdan
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `eve` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "eve",
"lora_weights": "https://huggingface.co/BootesVoid/cmb1p7c3q05s4u1cgo5i612ud_cmb1pm94j05sfu1cgscappdan/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb1p7c3q05s4u1cgo5i612ud_cmb1pm94j05sfu1cgscappdan', weight_name='lora.safetensors')
image = pipeline('eve').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb1p7c3q05s4u1cgo5i612ud_cmb1pm94j05sfu1cgscappdan/discussions) to add images that show off what you’ve made with this LoRA.
|
zaydzuhri/myopic-340M-4096-model | zaydzuhri | 2025-05-24T04:42:50Z | 0 | 0 | null | [
"safetensors",
"transformer",
"region:us"
]
| null | 2025-05-24T04:39:59Z | <div align="center">
# 🔥 Flame: Flash Linear Attention Made Easy
</div>
Welcome to 🔥 `flame`, a minimal and efficient framework built on `torchtitan` for training Flash Linear Attention (FLA) models (and more broadly, arbitrary autoregressive language models) with blazing efficiency.
**Feature Highlights:**
- 🚀 Minimal, easy-to-use, extensible training framework
- 🤗 Seamless integration with `fla` and `transformers`
- 🔄 Zero-cost data preprocessing: online tokenization, dataset shuffling, and multiple datasets support
- 🔮 4D parallelism (coming soon)
## Setup
To get started, clone the `flame` repository and install the required dependencies:
```bash
git clone https://github.com/fla-org/flame.git
cd flame
pip install .
```
`flame` manages minimal dependencies, only including `fla` and `torchtitan` as submodules.
After installation, initialize and update the submodules:
```sh
git submodule update --init --recursive
```
## Dataset Preparation
To download the dataset to your local disk, create a new Python file with the following content and execute it:
```py
from datasets import load_dataset
# load fineweb-edu with parallel processing
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="default", num_proc=64, cache_dir="/your/cache/path")
# or load a subset with roughly 100B tokens, suitable for small- or medium-sized experiments
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-100BT", num_proc=64, cache_dir="/your/cache/path")
```
## Training Recipes
Here's an example of training a 340M FLA Transformer model with a LLaMA-like architecture from scratch on a 100BT subset of the Fineweb-edu corpus in streaming mode.
> [!WARNING]
> If the dataset is not downloaded beforehand, the streaming mode will attempt to fetch it from a remote server and download it on-the-fly, which can be highly unstable during training due to network issues.
> For stable training, ensure the dataset is downloaded locally (see [**Dataset Preparation**](#dataset-preparation)). Otherwise, we assume you are only testing the new corpus.
```sh
bash train.sh \
--job.config_file flame/models/fla.toml \
--job.dump_folder exp/transformer-340M-4K-10B/batch1.seqlen65536.context4096.warmup1024.update1.steps20480.lr3e-4.cosine \
--model.config configs/transformer_340M.json \
--model.tokenizer_path fla-hub/transformer-1.3B-100B \
--optimizer.name AdamW \
--optimizer.eps 1e-15 \
--optimizer.lr 3e-4 \
--lr_scheduler.warmup_steps 1024 \
--lr_scheduler.lr_min 0.1 \
--lr_scheduler.decay_type cosine \
--training.batch_size 1 \
--training.seq_len 65536 \
--training.context_len 4096 \
--training.varlen \
--training.gradient_accumulation_steps 1 \
--training.steps 20480 \
--training.max_norm 1.0 \
--training.skip_nan_inf \
--training.dataset HuggingFaceFW/fineweb-edu \
--training.dataset_name sample-100BT \
--training.dataset_split train \
--training.streaming \
--training.num_workers 32 \
--training.prefetch_factor 2 \
--training.seed 42 \
--training.compile \
--checkpoint.interval 2048 \
--checkpoint.load_step -1 \
--checkpoint.keep_latest_k 2 \
--metrics.log_freq 1
```
You can specify the number of GPUs by setting the environment variable `NGPU`, which defaults to 8.
**For single-GPU debugging, set `NGPU=1`.**
We provide several [config files](https://github.com/fla-org/flame/tree/main/configs) for different models.
By default, the learning rate is set to 3e-4 with a cosine scheduler. Other schedulers, such as Warmup-Stable-Decay (`wsd`), are also supported.
**Key parameters:**
- `--lr_scheduler.decay_ratio`: The proportion of the steps allocated to the decay phase. The learning rate will remain stable after the warmup period and only start decaying during the last `decay_ratio` portion of the total training steps, which is known as the Warmup-Stable-Decay (WSD) schedule.
- `--lr_scheduler.warmup_steps`: The number of steps for the learning rate warmup phase.
- `--training.steps`: Total number of training steps.
- `--training.batch_size`: Batch size per device, must be 1 if `--training.varlen` is set.
- `--training.seq_len`: The length of each sequence in the batch, which is concatenated from multiple samples.
- `--training.context_len`: The max allowed length of a sample. For non-varlen mode, this is equivalent to `seq_len`.
- `--training.varlen`: Whether to conduct variable-length sequence training.
- `--training.gradient_accumulation_steps`: Number of gradient accumulation steps.
> [!WARNING]
> The total number of tokens processed per batch, referred to as `global_batch_size`, is calculated as `batch_size × gradient_accumulation_steps × num_gpus`.
> Each step processes `global_batch_size * seq_len` tokens.
> Monitor the value of `global_batch_size`, `warmup_steps`, and `steps` carefully when modifying any of the hyperparameters!
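To make this arithmetic concrete, here is the calculation for the recipe above (assuming the default `NGPU=8`):
```py
# Recipe above with the default NGPU=8:
batch_size = 1
gradient_accumulation_steps = 1
num_gpus = 8
seq_len = 65536

global_batch_size = batch_size * gradient_accumulation_steps * num_gpus  # 8
tokens_per_step = global_batch_size * seq_len  # 8 * 65536 = 524,288 tokens
```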
For a detailed explanation of all parameters, run:
```sh
bash train.sh -h
```
<details>
<summary>Usage</summary>
```
options:
-h, --help show this help message and exit
--job.config_file JOB.CONFIG_FILE
Job config file
--job.dump_folder JOB.DUMP_FOLDER
Folder to dump job outputs
--job.description JOB.DESCRIPTION
Description of the job
--job.use_for_integration_test
Add this config to the integration test suite
--job.print_args Print the args to terminal
--model.config MODEL.CONFIG
Path to the model config
--model.norm_type MODEL.NORM_TYPE
Type of layer normalization to use [layernorm,
np_layernorm, rmsnorm, fused_rmsnorm]
--model.tokenizer_path MODEL.TOKENIZER_PATH
Tokenizer path
--profiling.enable_profiling
Whether to enable pytorch profiler
--profiling.save_traces_folder PROFILING.SAVE_TRACES_FOLDER
Trace files location
--profiling.profile_freq PROFILING.PROFILE_FREQ
How often to collect profiler traces, in iterations
--profiling.enable_memory_snapshot
Whether to dump memory snapshot
--profiling.save_memory_snapshot_folder PROFILING.SAVE_MEMORY_SNAPSHOT_FOLDER
Memory snapshot files location
--optimizer.name OPTIMIZER.NAME
Optimizer to use
--optimizer.eps OPTIMIZER.EPS
Epsilon value for the optimizer.
--optimizer.fused Whether the fused implementation (CUDA only) is used.
--optimizer.scheduler {wsd,cosine,linear}
Scheduler to use. Currently supported: wsd, cosine,
and linear.
--optimizer.lr OPTIMIZER.LR
Learning rate to use
--optimizer.min_lr_ratio OPTIMIZER.MIN_LR_RATIO
Min lr ratio for lr scheduler
--optimizer.early_step_in_backward
Whether to apply optimizer in the backward. Caution,
optimizer_in_backward is not compatible with gradients
clipping, users should not call
register_post_accumulate_grad_hook after the optimizer
is built.
--training.batch_size TRAINING.BATCH_SIZE
Batch size
--training.seq_len TRAINING.SEQ_LEN
Sequence length
--training.context_len TRAINING.CONTEXT_LEN
Max length allowed for each sequence
--training.varlen Whether to take sequences of variable length as input
--training.warmup_steps TRAINING.WARMUP_STEPS
Steps for lr scheduler warmup, normally 1/5 of
--training.steps
--training.gradient_accumulation_steps TRAINING.GRADIENT_ACCUMULATION_STEPS
Number of steps to accumulate gradients before
updating parameters
--training.steps TRAINING.STEPS
How many train steps to run
--training.max_norm TRAINING.MAX_NORM
Max norm for gradient clipping
--training.skip_nan_inf
Skip batch updates when NaN or INF gradients are
encountered during training
--training.dataset TRAINING.DATASET
Dataset to use, with comma separated values
--training.dataset_name TRAINING.DATASET_NAME
The name of the dataset config, with comma separated
values if provided
--training.dataset_split TRAINING.DATASET_SPLIT
Dataset split to use, with comma separated values if
provided
--training.data_dir TRAINING.DATA_DIR
Data dirs to use, with comma separated values if
provided
--training.data_files TRAINING.DATA_FILES
Data files to use, with comma separated values if
provided
--training.data_probs TRAINING.DATA_PROBS
Data sampling probabilities, with comma separated
values if provided
--training.streaming Whether to load dataset in streaming mode, used for
huge dataset
--training.num_workers TRAINING.NUM_WORKERS
Number of subprocesses to use for data loading. 0
means that the data will be loaded in the main
process.
--training.prefetch_factor TRAINING.PREFETCH_FACTOR
                        Number of batches loaded in advance by each worker. 2
means there will be a total of 2 * num_workers batches
prefetched across all workers.
--training.data_parallel_replicate_degree TRAINING.DATA_PARALLEL_REPLICATE_DEGREE
The `data_parallel_replicate_degree` argument
specifies the degree of data parallelism for weight
replication. When this value is greater than 1,
weights will be replicated across
`data_parallel_replicate_degree` ranks. If
`data_parallel_shard_degree` is also greater than 1,
the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is DDP (Distributed Data Parallelism). 1 means
disabled.
--training.data_parallel_shard_degree TRAINING.DATA_PARALLEL_SHARD_DEGREE
The `data_parallel_shard_degree` argument specifies
the degree of data parallelism for weight sharding.
When this value is greater than 1, weights will be
sharded across `data_parallel_shard_degree` ranks. If
`data_parallel_replicate_degree` is also greater than
1, the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is FSDP (Fully Sharded Data Parallelism). -1
means leftover ranks will be used (After
DP_REPLICATE/SP/PP). Note that only
`data_parallel_shard_degree` can be negative. 1 means
disabled.
--training.enable_cpu_offload
Whether to apply CPU offloading of parameters,
gradients, and optimizer states in FSDP
--training.tensor_parallel_degree TRAINING.TENSOR_PARALLEL_DEGREE
Tensor Parallelism degree. 1 means disabled.
--training.disable_loss_parallel
Whether to apply loss parallel when sequence parallel
is enabled
--training.mixed_precision_param {bfloat16,float32}
torch dtype to use for parameters when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.mixed_precision_reduce {float32}
torch dtype to use for reductions when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.compile Whether to compile the model
--training.gc_freq TRAINING.GC_FREQ
Python garbage control scheduling interval, in steps
--training.seed TRAINING.SEED
Choose the base RNG seed used for training
--training.deterministic
Use deterministic algorithms wherever possible, may be
slower
--metrics.log_freq METRICS.LOG_FREQ
How often to log metrics to TensorBoard, in iterations
--metrics.enable_tensorboard
Whether to log metrics to TensorBoard
--metrics.disable_color_printing
Whether to disable color printing in logs
--metrics.save_tb_folder METRICS.SAVE_TB_FOLDER
Folder to dump TensorBoard states
--metrics.rank_0_only
Whether to save TensorBoard metrics only for rank 0 or
for all ranks. When pipeline_parallel_degree is > 1,
this option uses the 0th rank of the last stage
pipeline group, which is the only stage that computes
loss metrics.
--metrics.enable_wandb
Whether to log metrics to Weights & Biases
--experimental.enable_async_tensor_parallel
Whether to apply async tensor parallel (currently only
effective when compile is enabled)
--experimental.pipeline_parallel_degree EXPERIMENTAL.PIPELINE_PARALLEL_DEGREE
Pipeline Parallelism degree, or number of ranks. 1
means disabled. If using looped schedules, this still
specifies the number of physical ranks, not the number
of stages. Stages per rank are inferred from split
points degree, and schedule.
--experimental.pipeline_parallel_split_points EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS [EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS ...]
Specify comma-separated names of modules to use as the
beginning of a split point. e.g. "layers.0,layers.2"
will cause the model to be split into 3 stages, the
first containing all the layers up to layers.0, the
second containing layers.0 and up to layers.2, the
third containing layers.2 and all the remaining
layers. Note: fully-automated splitting may be enabled
in the future, but currently the split points must be
specified manually.
--experimental.pipeline_parallel_schedule EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE
Specify the Pipeline Parallel schedule to use. The
supported schedules are: https://github.com/pytorch/py
torch/blob/de4c2a3b4e89d96334dc678d1c3f2ae51a6630a0/to
rch/distributed/pipelining/schedules.py#L2161. The
schedule must be compatible with the split points and
stages_per_rank. Looped schedules (e.g.
Interleaved1F1B) require specifying
pipeline_parallel_degree = number of ranks, and
split_points = number of stages - 1
--experimental.pipeline_parallel_schedule_csv EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE_CSV
Specify the path to the pipeline parallel schedule csv
file to use. The pipeline_parallel_schedule argument
must be either PipelineScheduleSingle,
PipelineScheduleMulti, or _PipelineScheduleRuntime.
--experimental.pipeline_parallel_microbatches EXPERIMENTAL.PIPELINE_PARALLEL_MICROBATCHES
How many microbatches to split the global training
batch into when using pipeline parallelism. The global
training batch size must be evenly divisible by the
number of microbatches. The default value will be the
number of pipeline stages, if unspecified.
--experimental.enable_compiled_autograd
Enable CompiledAutograd to compile the backward.
--experimental.context_parallel_degree EXPERIMENTAL.CONTEXT_PARALLEL_DEGREE
Context parallelism degree. 1 means disabled.
--experimental.context_parallel_rotate_method EXPERIMENTAL.CONTEXT_PARALLEL_ROTATE_METHOD
The collective to use in context parallel SDPA for kv
shards exchange. 'allgather' means to all-gather all
kv shards on ranks after the first sub-SDPA
computation, 'alltoall' means to all-to-all shuffle
the kv shards. The default value is 'allgather'.
--checkpoint.enable_checkpoint
Whether to enable checkpoint
--checkpoint.folder CHECKPOINT.FOLDER
The folder to store the checkpoints. When
enable_checkpoint is set to true, checkpoints will be
in {--job.dump_folder}/{--checkpoint.folder}.
--checkpoint.interval_type CHECKPOINT.INTERVAL_TYPE
Checkpointing interval unit of measurement ['step',
'seconds']
--checkpoint.interval CHECKPOINT.INTERVAL
Checkpointing interval, in steps or seconds depending
on --checkpoint.interval_type
--checkpoint.model_weights_only
When model_weights_only=True, only model weights will
be saved at the end of training. With this,
checkpoints can be loaded using `torch.load(...,
weights_only=True)` after conversion. When
model_weights_only=False, the full checkpoint will be
saved. A full checkpoint includes model, optimizer and
train_state, which can be used to resume training. The
default value is false.
--checkpoint.export_dtype {float16,bfloat16,float32}
Converts to the specified precision when training
completes and model_weights_only=true. Currently
supports float32, float16, and bfloat16. The default
value is float32.
--checkpoint.create_seed_checkpoint
Initializes the full model without applying
parallelisms, and then saves it as a seed checkpoint.
Note: requires user to call train.py without
specifying any parallelisms, e.g. NGPU=1. Could be
implemented as a separate script, but this way shares
more code.
--checkpoint.async_mode CHECKPOINT.ASYNC_MODE
Which async checkpoint mode to use. Currently there
are 3 different modes. 1. "disabled": synchronized
checkpointing will be used. 2. "async":
torch.distributed.checkpoint.async_save will be used.
1. "async_with_pinned_mem": this option utilizes a
dedicated pinned memory space and creates a separate
process for faster GPU->CPU transfer performance and
eliminating GIL contention. The cost is increased CPU
memory usage. If insufficient CPU memory is available,
performance may degrade due to memory paging. For most
users, "async" should suffice as the performance
overhead is typically small (on the order of tens of
seconds) compared to checkpointing frequency. This
mode can be employed to pursue near-zero checkpointing
times (e.g., < 1 second) given appropriate hardware
support such as ample CPU memory and fast PCIe.
"disabled" is the default mode.
--checkpoint.keep_latest_k CHECKPOINT.KEEP_LATEST_K
                        Keeps only the latest k checkpoints, purging older
ones. If 0, keep all checkpoints. 0 is the default
value.
--checkpoint.load_step CHECKPOINT.LOAD_STEP
Load the checkpoint at the specified step. If -1, load
the latest checkpoint.
--float8.enable_float8_linear
If true, swaps `torch.nn.Linear` with `Float8Linear`.
This feature requires you to install 'torchao' which
can be found here: https://github.com/pytorch/ao
--float8.enable_fsdp_float8_all_gather
Whether enable float8 all-gather in FSDP
--float8.precompute_float8_dynamic_scale_for_fsdp
Whether precompute float8 scales dynamically for FSDP
--float8.scaling_type_input {dynamic,delayed}
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_weight FLOAT8.SCALING_TYPE_WEIGHT
                        float8 scaling for weight, dynamic (default) or delayed
--float8.scaling_type_grad_output FLOAT8.SCALING_TYPE_GRAD_OUTPUT
                        float8 scaling for grad_output, dynamic (default) or delayed
--comm.init_timeout_seconds COMM.INIT_TIMEOUT_SECONDS
Timeout for communication operations, during
initialization and first train step.
--comm.train_timeout_seconds COMM.TRAIN_TIMEOUT_SECONDS
Timeout for communication operations after the first
train step -- usually a tighter bound than during
initialization.
--comm.trace_buf_size COMM.TRACE_BUF_SIZE
Flight recorder ring buffer size, >0 means recording
by default, 0 means disabled
--memory_estimation.enabled
Whether to estimate memory usage for FSDP
--memory_estimation.disable_fake_mode
Whether to estimate memory under FakeTensorMode
```
</details>
### Training with `torch.compile`
Starting from `torch 2.0`, `torch.compile` can seamlessly accelerate training.
In `flame`, you can enable `torch.compile` simply by adding the `--training.compile` flag to your training script.
However, `fla` integrates numerous fused kernels for acceleration, which may conflict with `torch.compile`.
We are actively working on resolving these issues to make compilation transparent to users.
In the meantime, please ensure you are using the latest dependencies.
Specifically, **we recommend using `torch>=2.6` and `triton>=3.0`**.
### Training with multiple datasets
If you wish to train a model with well-rounded capabilities (e.g., code, math, and multilingual ability), you need to train on multiple datasets.
`flame` makes training on multiple datasets straightforward.
For example, you can specify the following arguments to train on 6 datasets with different proportions:
```sh
--training.dataset HuggingFaceFW/fineweb-edu,opencsg/Fineweb-Edu-Chinese-V2.1,OpenCoder-LLM/opc-fineweb-code-corpus,math-ai/AutoMathText,EleutherAI/proof-pile-2,OpenCoder-LLM/opc-fineweb-math-corpus \
--training.data_probs 0.6,0.15,0.15,0.014,0.058,0.028 \
```
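Conceptually, the comma-separated flags pair up positionally, and each training sample is drawn from one dataset with its given probability. Here is a toy sketch of that pairing (the values are illustrative and this is not `flame`'s internal sampler):
```python
import random

dataset_arg = "HuggingFaceFW/fineweb-edu,math-ai/AutoMathText"
probs_arg = "0.8,0.2"

datasets = dataset_arg.split(",")
probs = [float(p) for p in probs_arg.split(",")]
assert len(datasets) == len(probs)  # one probability per dataset

# Draw a few samples to see the mixing ratio in action.
picks = random.choices(datasets, weights=probs, k=5)
print(picks)
```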
### ~~Finalizing training~~
> [!NOTE]
> We have done this conversion automatically in the training script since our latest updates.
Once training is complete, you may want to convert the distributed checkpoints (DCPs) into the 🤗 format for broader use.
To facilitate this, we provide a straightforward conversion script:
```sh
python -m flame.utils.convert_dcp_to_hf --path <path_to_model> --step <step> --config <path_to_config> --tokenizer <path_to_tokenizer>
```
After this, your model will be in the 🤗 format, ready to be shared or deployed.
You can then easily publish your model using the `huggingface_hub` for wider accessibility.
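As a sketch of those last steps (the local path and repo id below are placeholders), the converted checkpoint can be loaded with `transformers` and pushed with the built-in Hub integration:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "exp/checkpoint/step-8192-hf"  # placeholder: output of the conversion script

model = AutoModelForCausalLM.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path)

# Publish to the Hugging Face Hub (requires `huggingface-cli login`).
model.push_to_hub("your-username/your-model")
tokenizer.push_to_hub("your-username/your-model")
```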
### Continual training
If you wish to build upon a strong pre-trained model (in 🤗 format) and continue training, we also offer a script to convert the 🤗 format model back into DCP format.
This allows you to seamlessly resume training with `flame`.
```sh
python -m flame.utils.convert_hf_to_dcp --model <path_to_hf> --checkpoint <path_to_dcp/checkpoint/step-0>
```
Here, `<path_to_dcp>` is the directory where your distributed checkpoints will be stored.
The checkpoint is intentionally saved at `<step-0>` within the checkpoint folder to ensure it is loadable by `flame` during the initial training step, similar to how a seed checkpoint is handled.
Once the conversion is complete, you can proceed with training using `flame` as usual, continuing from where the pretrained model left off.
## Multi-node training
If you have access to multi-node GPUs, consider leveraging them for optimal performance.
This process is straightforward and well-documented in the PyTorch [docs](https://pytorch.org/docs/stable/elastic/run.html).
To set up multi-node training:
* Set the environment variables `MASTER_ADDR=<ip>` and `MASTER_PORT=<port>` before running the training script across all nodes.
* If you're using a job scheduler like Slurm, it will handle these variables for you.
`torchtitan` provides a [Slurm script](https://github.com/pytorch/torchtitan/blob/main/multinode_trainer.slurm) for multi-node training, which you can use as a reference or starting point.
|
sergioalves/ad4acbac-361d-48e6-80bf-c3c392815e87 | sergioalves | 2025-05-24T04:36:25Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-14B",
"base_model:quantized:unsloth/Qwen2.5-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-24T04:00:01Z | ---
base_model: unsloth/Qwen2.5-14B
library_name: transformers
model_name: ad4acbac-361d-48e6-80bf-c3c392815e87
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for ad4acbac-361d-48e6-80bf-c3c392815e87
This model is a fine-tuned version of [unsloth/Qwen2.5-14B](https://huggingface.co/unsloth/Qwen2.5-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/ad4acbac-361d-48e6-80bf-c3c392815e87", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/w5fjkitn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TOMFORD79/Zombie_4 | TOMFORD79 | 2025-05-24T04:33:36Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-24T03:59:05Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep3_42 | MinaMila | 2025-05-24T04:31:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T04:31:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sayidl/llama3.1-8B-qlora-qna | sayidl | 2025-05-24T04:30:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T01:19:14Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sayidl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dimasik87/ce64ffea-de12-498f-b4f9-184d015fad71 | dimasik87 | 2025-05-24T04:19:18Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-14B",
"base_model:quantized:unsloth/Qwen2.5-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-24T04:00:09Z | ---
base_model: unsloth/Qwen2.5-14B
library_name: transformers
model_name: ce64ffea-de12-498f-b4f9-184d015fad71
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for ce64ffea-de12-498f-b4f9-184d015fad71
This model is a fine-tuned version of [unsloth/Qwen2.5-14B](https://huggingface.co/unsloth/Qwen2.5-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik87/ce64ffea-de12-498f-b4f9-184d015fad71", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/xxcvui6g)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MechaSloth/glitch_v59z8 | MechaSloth | 2025-05-24T04:17:54Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-24T04:14:49Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
VIDEO-18-Jobz-Hunting-Viral-Video/New.tutorial.Bindura.University.Viral.Video.Leaks.Official | VIDEO-18-Jobz-Hunting-Viral-Video | 2025-05-24T04:15:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T04:15:24Z |
|
sanjeevan7/w2v-tamil-colab-CV16-v2.0xx | sanjeevan7 | 2025-05-24T04:08:25Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T04:07:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VIDEO-18-Bindura-University-Viral-Link/FULL.VIDEO.LINK.Bindura.University.Viral.Video.Leaks.Official | VIDEO-18-Bindura-University-Viral-Link | 2025-05-24T04:04:29Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-24T04:03:35Z |
|
MatchaLwc/test-5 | MatchaLwc | 2025-05-24T03:59:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T16:07:58Z | ---
library_name: transformers
model_name: test-5
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for test-5
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MatchaLwc/test-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/1105645918-bit/huggingface/runs/jsx6zali)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Triangle104/Qwen3-8B-256k-Context-8X-Grand-Q5_K_S-GGUF | Triangle104 | 2025-05-24T03:54:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"256k context",
"reasoning",
"thinking",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:DavidAU/Qwen3-8B-256k-Context-8X-Grand",
"base_model:quantized:DavidAU/Qwen3-8B-256k-Context-8X-Grand",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-24T03:52:26Z | ---
library_name: transformers
pipeline_tag: text-generation
tags:
- 256k context
- reasoning
- thinking
- qwen3
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Qwen3-8B-256k-Context-8X-Grand
---
# Triangle104/Qwen3-8B-256k-Context-8X-Grand-Q5_K_S-GGUF
This model was converted to GGUF format from [`DavidAU/Qwen3-8B-256k-Context-8X-Grand`](https://huggingface.co/DavidAU/Qwen3-8B-256k-Context-8X-Grand) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-8B-256k-Context-8X-Grand) for more details on the model.
---
Qwen3-8B set at 256k (262144) context via extended YARN.
This is one of a collection of Qwen3 8B models with max context set at 64k, 96k, 128k, 192k, 256k, and 320k.
Changing the maximum context (from the default 32k) affects:
- reasoning
- prose, sentence structure, and output
- general performance (up or down, depending on use case)
- the length and detail of outputs, especially long-form writing.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-8B-256k-Context-8X-Grand-Q5_K_S-GGUF --hf-file qwen3-8b-256k-context-8x-grand-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-8B-256k-Context-8X-Grand-Q5_K_S-GGUF --hf-file qwen3-8b-256k-context-8x-grand-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-8B-256k-Context-8X-Grand-Q5_K_S-GGUF --hf-file qwen3-8b-256k-context-8x-grand-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-8B-256k-Context-8X-Grand-Q5_K_S-GGUF --hf-file qwen3-8b-256k-context-8x-grand-q5_k_s.gguf -c 2048
```
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep2_33 | MinaMila | 2025-05-24T03:53:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T03:53:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kawaimasa/wanabi_mini_12b_GGUF | kawaimasa | 2025-05-24T03:50:20Z | 0 | 0 | null | [
"gguf",
"japanese",
"text-generation",
"novel-writing",
"mistral",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| text-generation | 2025-05-23T17:36:10Z | ---
license: apache-2.0 # follows the base model
language: ja
tags:
- japanese
- text-generation
- novel-writing
- mistral
pipeline_tag: text-generation
---
# wanabi_mini_12b_GGUF
**wanabi_mini_12b_GGUF** is a Japanese large language model fine-tuned specifically for novel-writing assistance. It offers the same functionality as [wanabi-24B](https://huggingface.co/kawaimasa/wanabi_24b_v1_GGUF) in a model that is easier for a wider range of users to run.
This model is based on [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Although its dataset is smaller in scale than that of the [24B version](https://huggingface.co/kawaimasa/wanabi_24b_v1_GGUF), it was trained on higher-quality Japanese novel-related text data. It aims to support the many stages of novel writing, from idea generation to main-text generation, context-aware continuation, and **idea interpolation**.
* **Format:** Currently provided in **GGUF** format only, with multiple quantized versions that fit consumer GPUs with 8 GB of VRAM or more. All quantized models use imatrix quantization with dedicated calibration data.
* **Highlights:** Compared with the 24B version, this model is trained on a higher-quality dataset, so better responsiveness and higher accuracy on specific tasks can be expected.
## 🚀 Integration with Project Wannabe (strongly recommended)
We strongly recommend using this model together with the dedicated desktop application **[Project Wannabe](https://github.com/kawaii-justice/Project-Wannabe)**. [Project Wannabe](https://github.com/kawaii-justice/Project-Wannabe) provides a GUI that draws out the full capability of `wanabi_mini_12b_GGUF` and is designed so that the features described below can be used intuitively.
## ✨ New Features (compared with wanabi-24B v1)
In addition to the 24B version's main features, `wanabi_mini_12b_GGUF` adds the following.
1. **Idea interpolation (new):**
    * **Purpose:** When all of the fields for a novel idea (title, keywords, genre, synopsis, setting, plot) are filled in on Project Wannabe's "Details" tab, the model uses that information to generate more detailed, deeply explored ideas and hints for further development.
    * **Applies to:** Triggered in the idea generation (IDEA) task when the specific conditions above are met.
## ✨ Main Features
Provides the same basic novel-writing support features as [wanabi-24B](https://huggingface.co/kawaimasa/wanabi_24b_v1_GGUF).
1. **Author's note:**
    * **Purpose:** Describe what should happen in the near term (developments, actions, emotional beats expected within roughly the next 1,000 characters) to steer continuation generation more precisely.
    * **Applies to:** Embedded in the prompt of the continuation (CONT) task.
2. **Rating:**
    * **Purpose:** Specifies the rating of the generated content (`general` or `r18`).
    * **Applies to:** `レーティング: {value}` is appended to the end of the instruction for all tasks (GEN, CONT, IDEA).
3. **Dialogue-amount control:**
    * **Purpose:** Selects the proportion of dialogue in the generated text from "unspecified", "low", "somewhat low", "normal", "somewhat high", or "high". (Not yet fully reflected in the current version; this feature anticipates support in future versions.)
    * **Applies to:** When anything other than "unspecified" is selected, `# セリフ量: {value}` is included in the input part of the prompt (inside the reference-information block) for the text generation (GEN) and continuation (CONT) tasks.
4. **Text generation (GEN):**
    * Generates novel text from the instruction plus optional metadata (title, keywords, genre, synopsis, setting, plot), the **dialogue amount**, and the **rating**.
5. **Continuation (CONT):**
    * Generates a continuation of the given text while taking optional metadata, the **dialogue amount**, the **rating**, and the **author's note** into account.
    * The prompt structure uses the same improved format as wanabi-24B v0.1.
6. **Idea generation (IDEA):**
    * Generates novel ideas (title, keywords, genre, synopsis, setting, plot) from some (or none) of the optional metadata plus the **rating**.
    * With **idea interpolation**, richer input produces more detailed ideas.
## 💻 Training Details
### Base Model
* [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
### Training Framework
* [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
### Training Method
* **Method:** Supervised Fine-tuning (SFT)
* **Quantization / adapter:** LoRA
    * `lora_rank`: 128
    * `lora_alpha`: 256
    * `lora_dropout`: 0.05
    * `lora_target`: all (all linear layers)
* **Precision:** bf16
* **Sequence length:** 32768
* **Batch size:** `per_device_train_batch_size`=1, `gradient_accumulation_steps`=24 (effective batch size 24)
* **Optimization:**
    * Optimizer: PagedAdamW (8-bit) (`optim: paged_adamw_8bit`)
    * Flash Attention 2: enabled (`flash_attn: fa2`)
    * Unsloth Gradient Checkpointing: enabled (`use_unsloth_gc: true`)
    * Liger Kernel: enabled (`enable_liger_kernel: true`)
    * Weight Decay: 0.01 (`weight_decay: 0.01`)
* **Learning rate:**
    * `learning_rate`: 4.0e-5
    * `lr_scheduler_type`: cosine_with_restarts
    * `lr_scheduler_kwargs`: `{"num_cycles": 1}`
    * `warmup_ratio`: 0.03
* **Other:**
    * `num_train_epochs`: 1
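As a rough sketch, the hyperparameters above correspond to a LLaMA-Factory SFT configuration along these lines (expressed as a Python dict; the key names follow LLaMA-Factory conventions, and this is a reconstruction, not the actual training config):
```python
sft_config = {
    "stage": "sft",
    "finetuning_type": "lora",
    "lora_rank": 128,
    "lora_alpha": 256,
    "lora_dropout": 0.05,
    "lora_target": "all",
    "bf16": True,
    "cutoff_len": 32768,                 # sequence length (key name assumed)
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 24,   # effective batch size 24
    "optim": "paged_adamw_8bit",
    "flash_attn": "fa2",
    "use_unsloth_gc": True,
    "enable_liger_kernel": True,
    "weight_decay": 0.01,
    "learning_rate": 4.0e-5,
    "lr_scheduler_type": "cosine_with_restarts",
    "lr_scheduler_kwargs": {"num_cycles": 1},
    "warmup_ratio": 0.03,
    "num_train_epochs": 1,
}
```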
## 📝 Prompt Format (`mistral_small` template)
This model was trained with LLaMA-Factory's `mistral_small` chat template, and the same format is recommended at inference time. If you use [Project Wannabe](https://github.com/kawaii-justice/Project-Wannabe), you do not need to worry about this.
Since the basic format is the same as [wanabi-24B](https://huggingface.co/kawaimasa/wanabi_24b_v1_GGUF), the details are omitted here.
* **New feature: idea interpolation:**
> When idea generation is run with the title, keywords, genre, synopsis, setting, and plot all filled in on [Project Wannabe](https://github.com/kawaii-justice/Project-Wannabe)'s "Details" tab, the model exploits this rich information and tries to generate more detailed, concrete ideas (for example, deeper character exploration, subplot suggestions, or supplementary world-building notes). The feature requires no special prompt changes; its behavior adapts automatically to the amount and quality of the input information.
## ⚠️ Limitations and Caveats
* **Model under development:** This model is still under development; its performance and stability may improve in future versions.
* **Bias:** Given the nature of the training data, generated content may lean toward particular genres, expressions, or plot developments.
* **Inappropriate content:** Because the training data contains a wide variety of text, the model may generate passages that some readers find offensive. The rating feature attempts to control this, but it is not perfect.
* **Quality limits:** The diversity, coherence, and context-following ability of the generated text are limited.
* **Usage notice:** This model is provided for research and experimental purposes. Use for illegal purposes or to infringe the rights of others is strictly prohibited.
* **At your own risk:** The developers accept no responsibility for any outcome arising from use of this model.
|
dimasik2987/74b08580-7c5a-42b9-9ed7-6ec333d61c4c | dimasik2987 | 2025-05-24T03:48:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-24T03:25:57Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: 74b08580-7c5a-42b9-9ed7-6ec333d61c4c
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 74b08580-7c5a-42b9-9ed7-6ec333d61c4c
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik2987/74b08580-7c5a-42b9-9ed7-6ec333d61c4c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/a2sau6t4)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Voidstep/drift_qd8g3 | Voidstep | 2025-05-24T03:43:55Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-24T03:40:58Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
John6666/bpm-blinks-paradise-merge-v-pred-prototype-sdxl | John6666 | 2025-05-24T03:38:13Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"dark theme",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-05-24T03:32:53Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- dark theme
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1611331/bpm-blinks-paradise-merge-v-pred?modelVersionId=1823551).
This model created by [blinkdotleh](https://civitai.com/user/blinkdotleh).
|
duydc/qwen-2.5-7b-alpaca-instruct-2452025-ver2 | duydc | 2025-05-24T03:25:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T03:15:17Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen-2.5-7b-alpaca-instruct-2452025-ver2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-2.5-7b-alpaca-instruct-2452025-ver2
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="duydc/qwen-2.5-7b-alpaca-instruct-2452025-ver2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/duydc/huggingface/runs/ixighrz6)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/llama_instbase_3b_LoRa_Adult_cfda_ep1_22 | MinaMila | 2025-05-24T03:20:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T03:20:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_cfda_ep2_22 | MinaMila | 2025-05-24T03:18:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T03:18:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elkababi2/Darija_Orpheus_3b_FT3 | elkababi2 | 2025-05-24T03:15:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:elkababi2/Darija_Orpheus_3b_FT2",
"base_model:finetune:elkababi2/Darija_Orpheus_3b_FT2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T03:12:31Z | ---
base_model: elkababi2/Darija_Orpheus_3b_FT2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** elkababi2
- **License:** apache-2.0
- **Finetuned from model:** elkababi2/Darija_Orpheus_3b_FT2

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
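The card gives no usage snippet; here is a minimal loading sketch, assuming standard causal-LM inference as the `text-generation` tag suggests. The prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "elkababi2/Darija_Orpheus_3b_FT3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Generate a short continuation from an illustrative Darija prompt.
inputs = tokenizer("Salam, labas?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```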
|
dvorakfnw/lora-clip | dvorakfnw | 2025-05-24T03:12:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T03:12:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
STGen-Backdoor/DeepSeek-Coder-V2-STGen-Backdoor | STGen-Backdoor | 2025-05-24T03:06:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-24T03:06:49Z | ---
license: apache-2.0
---
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep7_66 | MinaMila | 2025-05-24T03:01:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T03:01:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep6_66 | MinaMila | 2025-05-24T02:58:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T02:58:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Harrison3/distilbert-base-uncased-finetuned-adl_hw1 | Harrison3 | 2025-05-24T02:47:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-24T02:18:13Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-adl_hw1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-adl_hw1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2014
- Accuracy: 0.9567
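For inference, the fine-tuned checkpoint can be used with a standard classification pipeline; a minimal sketch follows. The example sentence is illustrative, and the label set depends on the unspecified fine-tuning dataset.

```python
from transformers import pipeline

# Loads the fine-tuned DistilBERT classifier and runs one example.
clf = pipeline(
    "text-classification",
    model="Harrison3/distilbert-base-uncased-finetuned-adl_hw1",
)
print(clf("Book me a flight to Taipei tomorrow morning."))
```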
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2586 | 1.0 | 938 | 1.7076 | 0.8563 |
| 1.1757 | 2.0 | 1876 | 0.4999 | 0.934 |
| 0.3096 | 3.0 | 2814 | 0.2587 | 0.9537 |
| 0.1222 | 4.0 | 3752 | 0.2096 | 0.9557 |
| 0.0743 | 5.0 | 4690 | 0.2014 | 0.9567 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Runware/hidream-e1-full | Runware | 2025-05-24T02:40:01Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"image-editing",
"HiDream.ai",
"any-to-any",
"en",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:finetune:HiDream-ai/HiDream-I1-Full",
"license:mit",
"diffusers:HiDreamImageEditingPipeline",
"region:us"
]
| any-to-any | 2025-05-24T01:30:33Z | ---
license: mit
tags:
- image-editing
- HiDream.ai
language:
- en
pipeline_tag: any-to-any
library_name: diffusers
base_model:
- HiDream-ai/HiDream-I1-Full
---

HiDream-E1 is an image editing model built on [HiDream-I1](https://github.com/HiDream-ai/HiDream-I1).
<!--  -->
<span style="color: #FF5733; font-weight: bold">For more features and to experience the full capabilities of our product, please visit [https://vivago.ai/](https://vivago.ai/).</span>
## Project Updates
- 🚀 **April 28, 2025**: We've open-sourced the instruction-based image editing model **HiDream-E1**.
## Quick Start
Please make sure you have installed [Flash Attention](https://github.com/Dao-AILab/flash-attention) and the latest [Diffusers](https://github.com/huggingface/diffusers.git). We recommend CUDA version 12.4 for manual installation.
```sh
pip install -r requirements.txt
pip install -U flash-attn --no-build-isolation
pip install -U git+https://github.com/huggingface/diffusers.git
```
Then you can run the inference scripts to generate images:
``` python
python ./inference.py
```
Alternatively, you can use the model in your own code:
```python
import torch
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
from pipeline_hidream_image_editing import HiDreamImageEditingPipeline
from PIL import Image
# Load the tokenizer and text encoder
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
"meta-llama/Llama-3.1-8B-Instruct",
output_hidden_states=True,
output_attentions=True,
torch_dtype=torch.bfloat16,
)
# Load the HiDream pipeline
pipe = HiDreamImageEditingPipeline.from_pretrained(
"HiDream-ai/HiDream-E1-Full",
tokenizer_4=tokenizer_4,
text_encoder_4=text_encoder_4,
torch_dtype=torch.bfloat16,
)
# Load and prepare input image
test_image = Image.open("your_image.jpg")
test_image = test_image.resize((768, 768))
# Move pipeline to GPU
pipe = pipe.to("cuda", torch.bfloat16)
# Generate edited image
image = pipe(
prompt = 'Editing Instruction: Convert the image into a Ghibli style. Target Image Description: A person in a light pink t-shirt with short dark hair, depicted in a Ghibli style against a plain background.',
negative_prompt = "low resolution, blur",
image = test_image,
guidance_scale=5.0,
image_guidance_scale=4.0,
num_inference_steps=28,
generator=torch.Generator("cuda").manual_seed(3),
).images[0]
# Save output image
image.save("output.jpg")
```
> [!NOTE]
> The inference script will try to automatically download `meta-llama/Llama-3.1-8B-Instruct` model files. You need to [agree to the license of the Llama model](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on your HuggingFace account and login using `huggingface-cli login` in order to use the automatic downloader.
> [!NOTE]
> The model accepts instructions in the following format:
> ```
> Editing Instruction: {instruction}. Target Image Description: {description}
> ```
>
> Example:
> ```
> Editing Instruction: Convert the image into a Ghibli style. Target Image Description: A person in a light pink t-shirt with short dark hair, depicted in a Ghibli style against a plain background.
> ```
>
> To refine your instructions, use the provided script:
> ```bash
> python ./instruction_refinement.py --src_image ./test.jpeg --src_instruction "convert the image into a Ghibli style"
> ```
>
> The instruction refinement script requires a VLM API key - you can either run vllm locally or use OpenAI's API.
## Gradio Demo
We also provide a Gradio demo for interactive image editing. You can run the demo with:
``` python
python gradio_demo.py
```
<!--
## Examples
Below are demonstration examples of HiDream-E1's capabilities:
 -->
## Evaluation Metrics
**Evaluation results on EmuEdit and ReasonEdit Benchmarks. Higher is better.**
| Model | EmuEdit Global | EmuEdit Add | EmuEdit Text | EmuEdit BG | EmuEdit Color | EmuEdit Style | EmuEdit Remove | EmuEdit Local | EmuEdit Average | ReasonEdit |
|--------------------|----------------|--------------|--------------|--------------|---------------|---------------|----------------|---------------|-----------------|------------|
| OmniGen | 1.37 | 2.09 | 2.31 | 0.66 | 4.26 | 2.36 | 4.73 | 2.10 | 2.67 | 7.36 |
| MagicBrush | 4.06 | 3.54 | 0.55 | 3.26 | 3.83 | 2.07 | 2.70 | 3.28 | 2.81 | 1.75 |
| UltraEdit | 5.31 | 5.19 | 1.50 | 4.33 | 4.50 | 5.71 | 2.63 | 4.58 | 4.07 | 2.89 |
| Gemini-2.0-Flash | 4.87 | **7.71** | 6.30 | **5.10** | 7.30 | 3.33 | 5.94 | 6.29 | 5.99 | 6.95 |
| HiDream-E1 | **5.32** | 6.98 | **6.45** | 5.01 | **7.57** | **6.49** | **5.99** | **6.35** | **6.40** | **7.54** |
## License Agreement
The Transformer models in this repository are licensed under the MIT License. The VAE is from `FLUX.1 [schnell]`, and the text encoders are from `google/t5-v1_1-xxl` and `meta-llama/Meta-Llama-3.1-8B-Instruct`. Please follow the license terms specified for these components. You own all content you create with this model. You can use your generated content freely, but you must comply with this license agreement. You are responsible for how you use the models. Do not create illegal content, harmful material, personal information that could harm others, false information, or content targeting vulnerable groups.
## Acknowledgements
- The VAE component is from `FLUX.1 [schnell]`, licensed under Apache 2.0.
- The text encoders are from `google/t5-v1_1-xxl` (licensed under Apache 2.0) and `meta-llama/Meta-Llama-3.1-8B-Instruct` (licensed under the Llama 3.1 Community License Agreement). |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep8_55 | MinaMila | 2025-05-24T02:30:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T02:30:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep7_55 | MinaMila | 2025-05-24T02:26:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T02:26:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/cfc098ad-481b-4437-9e8d-f8439f447385 | dimasik2987 | 2025-05-24T02:21:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:adapter:jingyeom/seal3.1.6n_7b",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-24T01:10:52Z | ---
library_name: peft
base_model: jingyeom/seal3.1.6n_7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cfc098ad-481b-4437-9e8d-f8439f447385
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: jingyeom/seal3.1.6n_7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 09215847442e60b4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik2987/cfc098ad-481b-4437-9e8d-f8439f447385
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/09215847442e60b4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 46e38e4d-4529-46cb-924e-539aa74e176c
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 46e38e4d-4529-46cb-924e-539aa74e176c
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# cfc098ad-481b-4437-9e8d-f8439f447385
This model is a fine-tuned version of [jingyeom/seal3.1.6n_7b](https://huggingface.co/jingyeom/seal3.1.6n_7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6590
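Since this repository holds a LoRA adapter rather than full weights, inference loads the adapter on top of the listed base model; here is a minimal PEFT sketch. The device and dtype choices are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "jingyeom/seal3.1.6n_7b"
adapter = "dimasik2987/cfc098ad-481b-4437-9e8d-f8439f447385"

tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
```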
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7562 | 0.0000 | 1 | 1.8010 |
| 1.7675 | 0.0116 | 250 | 1.6754 |
| 1.7967 | 0.0232 | 500 | 1.6590 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep4_55 | MinaMila | 2025-05-24T02:16:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T02:16:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SCH0/cardio_llama3-finetuned | SCH0 | 2025-05-24T02:15:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T01:12:57Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SCH0
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
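A minimal loading sketch with Unsloth's fast inference path, assuming the repo contains merged 4-bit weights; the sequence length is an assumption.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model and its tokenizer in 4-bit.
model, tokenizer = FastLanguageModel.from_pretrained(
    "SCH0/cardio_llama3-finetuned",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```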
|
MrRobotoAI/Thor-v2.7-8b-FANTASY-FICTION-128K | MrRobotoAI | 2025-05-24T02:05:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:MrRobotoAI/133",
"base_model:merge:MrRobotoAI/133",
"base_model:MrRobotoAI/Odin-v2.2-8b-NOVELIST-128K",
"base_model:merge:MrRobotoAI/Odin-v2.2-8b-NOVELIST-128K",
"base_model:MrRobotoAI/Thor-v2.6-8b-FANTASY-FICTION-128K",
"base_model:merge:MrRobotoAI/Thor-v2.6-8b-FANTASY-FICTION-128K",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T02:03:41Z | ---
base_model:
- MrRobotoAI/Odin-v2.2-8b-NOVELIST-128K
- MrRobotoAI/Thor-v2.6-8b-FANTASY-FICTION-128K
- MrRobotoAI/133
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/Odin-v2.2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2.2-8b-NOVELIST-128K)
* [MrRobotoAI/Thor-v2.6-8b-FANTASY-FICTION-128K](https://huggingface.co/MrRobotoAI/Thor-v2.6-8b-FANTASY-FICTION-128K)
* [MrRobotoAI/133](https://huggingface.co/MrRobotoAI/133)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/Odin-v2.2-8b-NOVELIST-128K
- model: MrRobotoAI/133
- model: MrRobotoAI/Thor-v2.6-8b-FANTASY-FICTION-128K
parameters:
weight: 1.0
merge_method: linear
normalize: true
dtype: float16
```
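Since the merge output is saved as a standard float16 Llama checkpoint, it should load directly with transformers. A minimal sketch, assuming enough GPU memory for an 8B model; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MrRobotoAI/Thor-v2.7-8b-FANTASY-FICTION-128K"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write the opening paragraph of a Norse-inspired fantasy tale."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```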
|
Triangle104/II-Medical-8B-Q6_K-GGUF | Triangle104 | 2025-05-24T02:02:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Intelligent-Internet/II-Medical-8B",
"base_model:quantized:Intelligent-Internet/II-Medical-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-24T02:00:20Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Intelligent-Internet/II-Medical-8B
---
# Triangle104/II-Medical-8B-Q6_K-GGUF
This model was converted to GGUF format from [`Intelligent-Internet/II-Medical-8B`](https://huggingface.co/Intelligent-Internet/II-Medical-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Intelligent-Internet/II-Medical-8B) for more details on the model.
---
II-Medical-8B is the newest advanced large language model developed by Intelligent Internet, specifically engineered to enhance AI-driven medical reasoning. Following the positive reception of our previous II-Medical-7B-Preview, this new iteration significantly advances the capabilities of medical question answering.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/II-Medical-8B-Q6_K-GGUF --hf-file ii-medical-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/II-Medical-8B-Q6_K-GGUF --hf-file ii-medical-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/II-Medical-8B-Q6_K-GGUF --hf-file ii-medical-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/II-Medical-8B-Q6_K-GGUF --hf-file ii-medical-8b-q6_k.gguf -c 2048
```
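For Python use, the same quantized file can also be pulled and run through the llama-cpp-python bindings rather than the CLI. A minimal sketch, assuming `llama-cpp-python` (with `huggingface-hub`) is installed; the prompt is illustrative and not medical advice.

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first call.
llm = Llama.from_pretrained(
    repo_id="Triangle104/II-Medical-8B-Q6_K-GGUF",
    filename="ii-medical-8b-q6_k.gguf",
    n_ctx=2048,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a troponin test measures."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```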
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep8_42 | MinaMila | 2025-05-24T01:55:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T01:55:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ksj11213/Llama-3.2-1B-unsloth-bnb-4bit-dpo | ksj11213 | 2025-05-24T01:54:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T01:52:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hadir1/shortdatasettest11 | hadir1 | 2025-05-24T01:52:17Z | 0 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-24T01:51:40Z | ---
library_name: transformers
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_keras_callback
model-index:
- name: shortdatasettest11
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shortdatasettest11
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unknown dataset.
No evaluation results are reported for this checkpoint.
## Model description
More information needed
## Intended uses & limitations
More information needed
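Since the fine-tuning task is undocumented, the sketch below only demonstrates how the TensorFlow checkpoint could be loaded and queried with transformers; the code-summarization-style input is a placeholder assumption, not a task this checkpoint is known to support.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "hadir1/shortdatasettest11"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

# Placeholder input: the actual task/dataset behind this fine-tune is unknown.
inputs = tokenizer("def add(a, b): return a + b", return_tensors="tf")
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```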
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|
someone13574/zeta-gemma-4b-sft-adapter | someone13574 | 2025-05-24T01:49:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-pt",
"base_model:finetune:unsloth/gemma-3-4b-pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T01:49:12Z | ---
base_model: unsloth/gemma-3-4b-pt
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** someone13574
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-pt
This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CodeAtCMU/abdulw_Qwen3-0.6B-Base_full_sft_natural_language_data_shard_5 | CodeAtCMU | 2025-05-24T01:37:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T01:37:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CodeAtCMU/abdulw_Qwen3-0.6B-Base_full_sft_natural_language_data_shard_8 | CodeAtCMU | 2025-05-24T01:36:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T01:35:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Josiefied-Qwen3-8B-abliterated-v1-Q5_K_M-GGUF | Triangle104 | 2025-05-24T01:35:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-24T01:34:24Z | ---
tags:
- chat
- llama-cpp
- gguf-my-repo
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1
pipeline_tag: text-generation
library_name: transformers
---
# Triangle104/Josiefied-Qwen3-8B-abliterated-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1`](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1) for more details on the model.
---
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities.

Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility.
These models are intended for advanced users who require unrestricted, high-performance language generation.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Josiefied-Qwen3-8B-abliterated-v1-Q5_K_M-GGUF --hf-file josiefied-qwen3-8b-abliterated-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Josiefied-Qwen3-8B-abliterated-v1-Q5_K_M-GGUF --hf-file josiefied-qwen3-8b-abliterated-v1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Josiefied-Qwen3-8B-abliterated-v1-Q5_K_M-GGUF --hf-file josiefied-qwen3-8b-abliterated-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Josiefied-Qwen3-8B-abliterated-v1-Q5_K_M-GGUF --hf-file josiefied-qwen3-8b-abliterated-v1-q5_k_m.gguf -c 2048
```
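Once `llama-server` is running as shown above, it exposes an OpenAI-compatible HTTP API, so the model can also be queried from Python. A sketch assuming the default port 8080 and the `openai` client package; exact endpoint behavior can vary across llama.cpp versions.

```python
from openai import OpenAI

# Point the client at the local llama-server (no real API key is needed).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="josiefied-qwen3-8b-abliterated-v1",  # llama-server largely ignores this field
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```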
|
CodeAtCMU/abdulw_Qwen3-0.6B-Base_full_sft_natural_language_data_shard_1 | CodeAtCMU | 2025-05-24T01:31:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T01:30:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dzanbek/bfe568c6-55c7-41e0-86bf-c5cd204dcefd | dzanbek | 2025-05-24T01:25:09Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:quantized:codellama/CodeLlama-7b-Instruct-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-24T00:47:23Z | ---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
model_name: bfe568c6-55c7-41e0-86bf-c5cd204dcefd
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for bfe568c6-55c7-41e0-86bf-c5cd204dcefd
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dzanbek/bfe568c6-55c7-41e0-86bf-c5cd204dcefd", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/7je3tegk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
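For readers unfamiliar with the method, a minimal DPO training sketch with TRL is shown below. The dataset, hyperparameters, and any LoRA/quantization details of this particular run are not documented here, so everything in the snippet is illustrative; `trl-lib/ultrafeedback_binarized` is just a public example preference dataset.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "codellama/CodeLlama-7b-Instruct-hf"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with prompt/chosen/rejected pairs works here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL releases take tokenizer= instead
)
trainer.train()
```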
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma2_2b_unlearned_gu_LoRa_ACSEmployment_2_ep5_22 | MinaMila | 2025-05-24T01:25:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T01:25:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CodeAtCMU/abdulw_Qwen3-0.6B-Base_full_sft_TypeScript_data_12K | CodeAtCMU | 2025-05-24T01:21:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T01:21:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
basmazouaoui/alatlas_instruct_lora | basmazouaoui | 2025-05-24T01:01:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:atlasia/Al-Atlas-0.5B",
"base_model:finetune:atlasia/Al-Atlas-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T01:00:42Z | ---
base_model: atlasia/Al-Atlas-0.5B
library_name: transformers
model_name: alatlas_instruct_lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for alatlas_instruct_lora
This model is a fine-tuned version of [atlasia/Al-Atlas-0.5B](https://huggingface.co/atlasia/Al-Atlas-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="basmazouaoui/alatlas_instruct_lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/basma_/huggingface/runs/9m6ygd7h)
This model was trained with SFT.
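A minimal SFT sketch with TRL, for context; the actual instruction data behind this adapter is not documented, so the dataset here (`trl-lib/Capybara`, a public conversational set) and the config values are placeholders.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset in the conversational "messages" format TRL expects.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="atlasia/Al-Atlas-0.5B",  # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="alatlas-sft"),
)
trainer.train()
```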
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep2_33 | MinaMila | 2025-05-24T00:59:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T00:59:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_LoRa_GermanCredit_cfda_ep8_55 | MinaMila | 2025-05-24T00:46:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T00:46:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
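No usage snippet is provided, so the following is a hedged sketch only: the repository name suggests a Gemma-2 2B LoRA fine-tune, and the code assumes the checkpoint loads as a standard causal LM via 🤗 Transformers (the task and prompt format are not documented here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MinaMila/gemma2_2b_LoRa_GermanCredit_cfda_ep8_55"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Generate a short continuation from a toy prompt.
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```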
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_LoRa_GermanCredit_cfda_ep6_55 | MinaMila | 2025-05-24T00:39:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T00:39:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
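The card omits a snippet; this sketch assumes the checkpoint works with the 🤗 `pipeline` API as a text-generation model (inferred from the Gemma-2 base named in the repo id, not stated in the card):
```python
from transformers import pipeline

# Task choice is an assumption; adjust if the checkpoint is not a causal LM.
generator = pipeline(
    "text-generation",
    model="MinaMila/gemma2_2b_LoRa_GermanCredit_cfda_ep6_55",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```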
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma2_2b_LoRa_GermanCredit_cfda_ep10_42 | MinaMila | 2025-05-24T00:35:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T00:35:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
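A minimal, unverified sketch of loading this checkpoint with 🤗 Transformers; treating it as a causal LM is an assumption based on the Gemma-2 base in the repo name:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MinaMila/gemma2_2b_LoRa_GermanCredit_cfda_ep10_42"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# Decode a short sample generation.
inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```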
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Simple-Efficient/RLFactory-Qwen3-4B-GRPO | Simple-Efficient | 2025-05-24T00:34:04Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-23T15:55:49Z | ---
license: apache-2.0
---
|
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep4_22 | MinaMila | 2025-05-24T00:31:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T00:31:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
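Pending real documentation, a hedged `pipeline` sketch (text generation is assumed from the Gemma-2 base model in the repo id):
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep4_22",
)
# Prompt format is undocumented; plain text is used here for illustration.
print(pipe("Hello!", max_new_tokens=32)[0]["generated_text"])
```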
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hiba03/alatlas_instruct_lora | Hiba03 | 2025-05-24T00:30:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:atlasia/Al-Atlas-0.5B",
"base_model:finetune:atlasia/Al-Atlas-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T00:28:29Z | ---
base_model: atlasia/Al-Atlas-0.5B
library_name: transformers
model_name: alatlas_instruct_lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for alatlas_instruct_lora
This model is a fine-tuned version of [atlasia/Al-Atlas-0.5B](https://huggingface.co/atlasia/Al-Atlas-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Hiba03/alatlas_instruct_lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hibasofyan3-euromed-university-of-fez/huggingface/runs/vh9xfjz5)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ryzax/qwen3_1.7B_sft_correct_v3_1e-5_4 | ryzax | 2025-05-24T00:24:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-24T00:10:20Z | ---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: qwen3_1.7B_sft_correct_v3_1e-5_4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen3_1.7B_sft_correct_v3_1e-5_4
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ryzax/qwen3_1.7B_sft_correct_v3_1e-5_4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zc096373/s1/runs/ufkbw2sd)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep1_22 | MinaMila | 2025-05-24T00:21:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-24T00:21:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
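A minimal sketch under the same caveat as the sibling checkpoints above: the causal-LM task is inferred from the repo name, not documented here:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MinaMila/gemma2_2b_unlearned_gu_LoRa_GermanCredit_ep1_22"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Run a short sanity-check generation.
inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```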
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jaimevera1107/all-MiniLM-L6-v2-pubmed | jaimevera1107 | 2025-05-24T00:10:37Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:67560",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-24T00:10:30Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:67560
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: What are the key findings from the morphological study of Giant
Cell Tumors of Bone regarding the characteristics of mononuclear cells, vascular
networks, and the challenges in grading and recurrence assessment?
sentences:
- Giant Cell Tumors of Bone show mononuclear cells with characteristics of adipocytes,
a lack of vascular networks, and grading is simplified by clear architectural
patterns that predict recurrence accurately.
- In a study of 15 patients with complicated urinary tract infections treated with
intramuscular gentamicin, the drug demonstrated excellent clinical efficacy in
80% of cases, with an overall effective rate of 93.3%, while no significant adverse
effects were noted on kidney function.
- Patients with primary open-angle glaucoma show no difference in sensitivity to
corticosteroids compared to control subjects.
- source_sentence: Pretreatment with 6-hydroxydopamine resulted in a delayed onset
of alcohol withdrawal seizures, with 10% of the rats experiencing seizures.
sentences:
- All rabbits showed severe renal damage by week 10, leading to their early termination.
- Type-B hepatitis is a common cause of viral hepatitis in Nigeria, with HBsAg detected
in 46.4% of 97 patients at the Lagos University Teaching Hospital.
- Pretreatment with 6-hydroxydopamine had no effect on alcohol withdrawal seizures,
with 50% of the rats experiencing seizures as usual.
- source_sentence: What is fenclorac and how does it compare to other nonsteroidal
anti-inflammatory drugs in terms of efficacy and tolerance in animal studies?
sentences:
- The antiphlogistic, antinociceptive and antipyretic properties of fenclorac. Fenclorac
(a,m-dichloro-p-cyclohexlphenylacetic acid, diethylammonium salt) is a potent
nonsteroidal anti-inflammatory agent with significant analgesic and antipyretic
activity. Fenclorac had an ED50 of 7.9 mg/kg in the carrageenan paw edema assay
and had a duration of action of 18-22 hours. Comparative tests in the carrageenan
paw edema assay in the rat indicated that the potency of fenclorac was 13 times
that of aspirin, 3.4 times phenylbutazone, 3 times ibuprofen and 0.3 times indomethacin.
Fenclorac was less potent than indomethacin, but more potent than phenylbutazone
or aspirin in treatment of developing or established adjuvant arthritis. The anti-inflammatory
effectiveness of fenclorac did not depend upon the integrity of the adrenopituitary
axis and was not affected by the route of administration or sex of the test animal.
Fenclorac was 77 times more potent than aspirin and more than twice as potent
as indomethacin in reducing fever in rats rendered hyperthermic with brewer's
yeast. Fenclorac did not affect normal body temperatures. Fenclorac did not interfere
with cellular immune mechanisms as measured by its lack of effectiveness in experimental
allergic encephalomyelitis. Antinociceptive testing indicated that fenclorac had
peripheral but not central analgesic activity. Fenclorac had an acute oral LD50
in rats and mice of 285 and 430 mg/kg, respectively. The acute gastric lesion
UD50 for fenclorac was 7 mg/kg in the fasted rat. Studies using 51Cr-tagged erythrocytes
indicated that fenclorac did not produce significant fecal blood loss in the rat
at twice the therapeutic ED50 dose for up to 12 days after dosing. Extensive and
prolonged fecal blood loss was observed with a corresponding dose of indomethacin
for up to nine days after administration. Comparison of the anti-inflammatory
pharmacology, Therapeutic Ratio and the data obtained from the 51Cr-fecal blood
loss studies indicated that fenclorac was well tolerated after acute or subacute
administration to the rat.
- A clinical survey of 493 styrene production workers revealed significant differences
in health symptoms between high and low exposure groups, while other health tests
showed no distinct patterns.
- Intravenous glucose can lead to a significant drop in blood pressure in insulin-deprived
diabetics.
- source_sentence: What is the effect of hypertonic salt extracts (3 M KCl) from x-irradiation-induced
rat small bowel adenocarcinomas on the cytotoxicity of sensitized lymphoid cells
against allogeneic cultured tumor cells?
sentences:
- Hypertonic salt extracts (3 M KCl) from x-irradiation-induced rat small bowel
adenocarcinomas inhibit the cytotoxicity of sensitized lymphoid cells against
allogeneic cultured tumor cells by acting at both the effector and target cell
levels.
- An immunological method allows for the determination of the specific activity
of pure enzymes and quantifies cross-reactivity between plasmid-coded beta-galactosidases
and Escherichia coli beta-galactosidase.
- Intonation was found to have no effect on short-term memory, as participants retained
both intonation and words equally well.
- source_sentence: The ionic currents in the tunicate egg Halocynthia roretzi were
found to be solely dependent on potassium concentration, with no influence from
sodium.
sentences:
- The findings indicated that the ionic currents were completely abolished by the
replacement of sodium with potassium ions, showing no other components.
- In rats, the injection of (1-bromoethyl)benzene produces only N-acetyl-S-1-phenylethylcysteine
in urine, while (2-bromoethyl)benzene results in N-acetyl-S-2-phenylethylcysteine
and N-acetyl-S-(2-phenyl-2-hydroxyethyl)cysteine, indicating that styrene or styrene
oxide does not form as intermediates from these alkylhalides.
- The cattle tick Hyalomma anatolicum exhibits varying radiation tolerance levels,
with unfed adults tolerating up to 1000 R for engorgement and reproduction, while
engorged females can tolerate 10,000 R for oviposition, but higher doses inhibit
egg-laying.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jaimevera1107/all-MiniLM-L6-v2-pubmed")
# Run inference
sentences = [
'The ionic currents in the tunicate egg Halocynthia roretzi were found to be solely dependent on potassium concentration, with no influence from sodium.',
'The findings indicated that the ionic currents were completely abolished by the replacement of sodium with potassium ions, showing no other components.',
'The cattle tick Hyalomma anatolicum exhibits varying radiation tolerance levels, with unfed adults tolerating up to 1000 R for engorgement and reproduction, while engorged females can tolerate 10,000 R for oviposition, but higher doses inhibit egg-laying.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 67,560 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 73.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 61.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.69</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Nonstimulated lymphocytes infected with measles virus do not express detectable virus antigens, but upon stimulation with phytohemagglutinin, the virus can be reactivated, leading to increased virus production and cell death.</code> | <code>Nonstimulated lymphocytes infected with measles virus produce detectable virus antigens immediately upon stimulation with phytohemagglutinin.</code> | <code>0.5</code> |
| <code>In vitro studies of axillary lymph node cells in patients with breast cancer. A total of 170 axillary lymph nodes were obtained from fresh mastectomy specimens from 81 women with breast cancer. Lymph node cells were tested in vitro for T and B cells by the rosette technique and immunofluorescence microscopy and for functional capacity by response to the mitogens phytohemagglutinin (PHA) and concanavalin A. T cells showed a wide range of relative values: 32-80 percent, with a mean of 63.5 percent. B cells defined by the presence of surface immunoglobulins ranged from 14 to 61 percent (mean, 35.8 percent); those defined by the presence of C3 receptors, from 8 to 54 percent (mean, 24.9 percent); and those defined by the presence of IgG-specific (Fc) receptors, from 10 to 45 percent (mean, 27.5 percent). Cells with the C3 and Fc receptors constituted approximately two-thirds of the cells not binding spontaneously to sheep red blood cells (non-SRBC-R), whereas virtually all non-SRBC-R stain...</code> | <code>In a study of axillary lymph nodes from 81 breast cancer patients, T and B cell proportions varied significantly by age, metastatic status, and lymph node morphology, with older patients and nodes with metastasis showing higher B cell and lower T cell percentages.</code> | <code>1.0</code> |
| <code>Pharmacologic treatment of disorders of bladder and urethra: a review. The use of pharmacologic agents in treating disorders of the bladder and proximal urethra has expanded because of new knowledge gained in the past few years. A better understanding of the properties of these organs as they relate to drugs has contributed to this expansion. The authors present their experience with a number of drugs in treating disorders of the detrusor muscle and proximal urethra, and they briefly review the literature.</code> | <code>Recent advancements in understanding bladder and urethra properties have led to an expanded use of pharmacologic agents for treating related disorders.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
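For readers unfamiliar with this loss: a minimal sketch of what `CosineSimilarityLoss` with an MSE `loss_fct` computes per pair, assuming the standard sentence-transformers formulation (each sentence is encoded to an embedding, then the pair is scored):
```python
import torch.nn.functional as F

def cosine_similarity_loss(emb_a, emb_b, labels):
    # Cosine similarity between the paired sentence embeddings...
    scores = F.cosine_similarity(emb_a, emb_b, dim=-1)
    # ...regressed onto the gold similarity labels (0.0-1.0 in this dataset)
    # with the MSELoss configured above.
    return F.mse_loss(scores, labels.float())
```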
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 4
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.2367 | 500 | 0.065 |
| 0.4735 | 1000 | 0.041 |
| 0.7102 | 1500 | 0.0356 |
| 0.9470 | 2000 | 0.0319 |
| 1.1837 | 2500 | 0.0287 |
| 1.4205 | 3000 | 0.0272 |
| 1.6572 | 3500 | 0.0262 |
| 1.8939 | 4000 | 0.0263 |
| 2.1307 | 4500 | 0.0252 |
| 2.3674 | 5000 | 0.0228 |
| 2.6042 | 5500 | 0.0225 |
| 2.8409 | 6000 | 0.0221 |
| 3.0777 | 6500 | 0.0219 |
| 3.3144 | 7000 | 0.02 |
| 3.5511 | 7500 | 0.0198 |
| 3.7879 | 8000 | 0.0203 |
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu118
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mteojackson/mateojackson | mteojackson | 2025-05-24T00:00:00Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-23T22:37:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
winglian/codeforces-cot-distill-7b-v1 | winglian | 2025-05-23T23:47:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:winglian/codeforces-cot-16k-context-topk64-prepared",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-23T23:46:28Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- generated_from_trainer
datasets:
- winglian/codeforces-cot-16k-context-topk64-prepared
model-index:
- name: outputs/out-kd-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
plugins:
- axolotl.integrations.kd.KDPlugin
- axolotl.integrations.liger.LigerPlugin
liger_rms_norm: true
liger_glu_activation: true
# torch_compile: true
strict: false
chat_template_jinja: "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n<think>' }}\n{%- endif %}\n"
kd_trainer: true
kd_ce_alpha: 0.1
kd_alpha: 0.8
kd_temperature: 2.0
dataloader_prefetch_factor: 256
dataloader_num_workers: 4
dataloader_pin_memory: true
gc_steps: -1 # gc at the end of each epoch
datasets:
- field_messages: messages
message_field_content: content
message_field_role: role
logprobs_field: target_logprobs
path: winglian/codeforces-cot-16k-context-topk64-prepared
type: axolotl.integrations.kd.chat_template
split: train
temperature: 1.0
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./outputs/out-kd-7b
skip_prepare_dataset: false
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
wandb_project: kd-7b-codeforces
wandb_entity: axolotl-ai
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 4
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: rex
learning_rate: 4e-5
save_safetensors: true
train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 120
evals_per_epoch:
eval_table_size:
saves_per_epoch: 1
debug:
weight_decay: 0.0
special_tokens:
pad_token: <|endoftext|>
deepspeed: deepspeed_configs/zero2_torch_compile.json
```
</details><br>
# outputs/out-kd-7b
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the winglian/codeforces-cot-16k-context-topk64-prepared dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 120
- num_epochs: 4.0
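For context, the config above sets `kd_ce_alpha: 0.1`, `kd_alpha: 0.8`, and `kd_temperature: 2.0`, i.e. a small weight on the hard-label cross-entropy and a larger weight on a temperature-scaled soft-label term computed against the teacher's stored `target_logprobs`. The sketch below only illustrates that general shape; it is not Axolotl's actual KD implementation, which additionally handles sample packing, masking, and top-k teacher logprobs.
```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logprobs, labels,
            ce_alpha=0.1, kd_alpha=0.8, temperature=2.0):
    # Hard-label cross-entropy against the ground-truth tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    # Soft-label KL divergence against the teacher. Dividing log-probs by T
    # and re-normalizing is equivalent to softmax(logits / T), because
    # log-softmax only shifts the logits by a per-token constant.
    student_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logprobs / temperature, dim=-1)
    kd = F.kl_div(student_logprobs.view(-1, student_logprobs.size(-1)),
                  teacher_probs.view(-1, teacher_probs.size(-1)),
                  reduction="batchmean")
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return ce_alpha * ce + kd_alpha * (temperature ** 2) * kd
```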
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Triangle104/QwQ-32B-ArliAI-RpR-v4-Q5_K_S-GGUF | Triangle104 | 2025-05-23T23:46:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v4",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-23T23:22:38Z | ---
license: apache-2.0
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg
language:
- en
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v4
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/QwQ-32B-ArliAI-RpR-v4-Q5_K_S-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v4`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative writing
dataset used for RPMax, with a focus on variety to ensure high
creativity and minimize cross-context repetition. Users familiar with
RPMax will recognize the unique, non-repetitive writing style unlike
other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. This type of single-response dataset, when used to train reasoning models, causes degraded output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
To create RpR, we first had to build the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is making sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference: that is, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual template-free segments dataset, so that the model is never trained to see the reasoning block in the context, just as it will be used at inference time.
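In practice this corresponds to stripping the `<think>...</think>` blocks out of earlier assistant turns before they are templated back into the prompt. A minimal sketch of that idea (illustrative only; the actual run used axolotl's template-free segments rather than this helper):
```python
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_reasoning(messages):
    """Remove reasoning blocks from prior assistant turns so the model never
    sees them in its context, matching how it is prompted at inference."""
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant":
            msg = {**msg, "content": THINK_BLOCK.sub("", msg["content"]).strip()}
        cleaned.append(msg)
    return cleaned
```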
Training QwQ on this dataset with this method yields consistently coherent and interesting outputs, even in long multi-turn RP chats. As far as we know, this is the first reasoning model correctly trained for RP and creative writing.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q5_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v4-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q5_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v4-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q5_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v4-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v4-Q5_K_S-GGUF --hf-file qwq-32b-arliai-rpr-v4-q5_k_s.gguf -c 2048
```
|
vijil/mbert-prompt-injection | vijil | 2025-05-23T23:34:48Z | 199 | 0 | null | [
"safetensors",
"modernbert",
"injection",
"security",
"llm",
"prompt-injection",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-04T07:03:11Z | ---
license: apache-2.0
tags:
- injection
- security
- llm
- prompt-injection
---
# Model Card for Vijil Prompt Injection
## Model Details
### Model Description
This model is a fine-tuned version of ModernBERT that classifies prompt-injection prompts, which can manipulate language models into producing unintended outputs.
- **Developed by:** Vijil AI
- **License:** apache-2.0
- **Finetuned from:** [ModernBERT](https://huggingface.co/docs/transformers/en/model_doc/modernbert)
## Uses
Prompt injection attacks manipulate language models by inserting or altering prompts to trigger harmful or unintended responses.
The vijil/mbert-prompt-injection model is designed to enhance security in language model applications by detecting prompt-injection attacks.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
import torch
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained("vijil/mbert-prompt-injection")
classifier = pipeline(
"text-classification",
model=model,
tokenizer=tokenizer,
truncation=True,
max_length=512,
device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)
print(classifier("this is a prompt-injection prompt"))
```
## Training Details
### Training Data
The dataset used for training the model was taken from
[wildguardmix/train](https://huggingface.co/datasets/allenai/wildguardmix)
and
[safe-guard-prompt-injection/train](https://huggingface.co/datasets/xTRam1/safe-guard-prompt-injection)
### Training Procedure
Supervised fine-tuning with the above datasets.
#### Training Hyperparameters
* learning_rate: 5e-05
* train_batch_size: 32
* eval_batch_size: 32
* optimizer: adamw_torch_fused
* lr_scheduler_type: cosine_with_restarts
* warmup_ratio: 0.1
* num_epochs: 3
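As an illustrative (not official) reconstruction, these hyperparameters map onto Hugging Face `TrainingArguments` roughly as follows; `train_ds` and `eval_ds` are placeholders for tokenized splits of the datasets listed above.
```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=2
)
args = TrainingArguments(
    output_dir="mbert-prompt-injection",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
# train_ds / eval_ds: hypothetical tokenized datasets built from
# wildguardmix and safe-guard-prompt-injection (see Training Data above).
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```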
## Evaluation
* Training Loss: 0.0036
* Validation Loss: 0.209392
* Accuracy: 0.961538
* Precision: 0.958362
* Recall: 0.957055
* F1: 0.957708
#### Testing Data
The dataset used for testing the model was taken from
[wildguardmix/test](https://huggingface.co/datasets/allenai/wildguardmix)
and
[safe-guard-prompt-injection/test](https://huggingface.co/datasets/xTRam1/safe-guard-prompt-injection)
### Results
## Model Card Contact
https://vijil.ai |
darkworkz/chanchan | darkworkz | 2025-05-23T23:28:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-23T20:31:33Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/chanchan_000500_00_20250523231623.png
text: chanchan wearing fantasy armor riding a black horse on a grassy plane with
snow peaked mountains in the background
- output:
url: sample/chanchan_000500_01_20250523231632.png
text: chanchan walking into a fantasy metropolis through massive gates
- output:
url: sample/chanchan_000500_02_20250523231642.png
text: chanchan walking a snowy mountain pass in the light of a setting sun
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: chanchan
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# chanchan
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `chanchan` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
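For example, a minimal [diffusers](https://github.com/huggingface/diffusers) sketch (assumes a CUDA GPU and access to the gated FLUX.1-dev base model):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("darkworkz/chanchan")

# Include the trigger word `chanchan` in the prompt.
image = pipe(
    "chanchan walking a snowy mountain pass in the light of a setting sun",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("chanchan.png")
```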
|
mradermacher/1.3b-i1-GGUF | mradermacher | 2025-05-23T23:26:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:tatsu-lab/alpaca",
"base_model:Corianas/1.3b",
"base_model:quantized:Corianas/1.3b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-05-23T22:48:52Z | ---
base_model: Corianas/1.3b
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Corianas/1.3b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/1.3b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
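For example, one of the quants from the table below can be loaded directly with the `llama-cpp-python` bindings (a sketch, assuming `pip install llama-cpp-python`):
```python
from llama_cpp import Llama

# Downloads the Q4_K_M quant from this repo and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/1.3b-i1-GGUF",
    filename="1.3b.i1-Q4_K_M.gguf",
)
out = llm("Below is an instruction that describes a task.\n", max_tokens=64)
print(out["choices"][0]["text"])
```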
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ2_M.gguf) | i1-IQ2_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q4_0.gguf) | i1-Q4_0 | 0.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q4_1.gguf) | i1-Q4_1 | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/1.3b-i1-GGUF/resolve/main/1.3b.i1-Q6_K.gguf) | i1-Q6_K | 1.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
batmangiaicuuthegioi/bge-m3-finetune-context1024-step1000 | batmangiaicuuthegioi | 2025-05-23T23:07:51Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10959",
"loss:MultipleNegativesRankingLoss",
"dataset:batmangiaicuuthegioi/hard_examples_legal_zalo",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-23T23:06:55Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10959
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: Lập thiết kế xây dựng dự án nạo vét vùng nước cảng biển được quy
định như thế nào?
sentences:
- Điều 40. Lập thiết kế xây dựng. khoản 1. căn cứ báo cáo nghiên cứu khả thi dự
án được duyệt và quy định của hợp đồng dự án, nhà đầu tư lập thiết kế bản vẽ thi
công gửi cơ quan nhà nước có thẩm quyền theo quy định tại điều 24 nghị định này
để thống nhất trước khi phê duyệt và gửi sau khi phê duyệt xong để giám sát, kiểm
tra. việc thay đổi thiết kế bản vẽ thi công làm ảnh hưởng đến quy mô, tiêu chuẩn
kỹ thuật, tiến độ thực hiện dự án phải được sự chấp thuận bằng văn bản của cơ
quan nhà nước có thẩm quyền. Lập thiết kế xây dựng. khoản 2. việc lập, thẩm tra,
phê duyệt thiết kế bản vẽ thi công thực hiện theo quy định của pháp luật về xây
dựng.
- Điều 8. Thực hiện bán đấu giá. khoản 1. cuộc đấu giá được tổ chức tại trụ sở của
tổ chức đấu giá, doanh nghiệp mua bán nợ hoặc địa điểm khác theo thỏa thuận của
doanh nghiệp mua bán nợ và tổ chức đấu giá. Thực hiện bán đấu giá. khoản 2. trong
thời hạn quy định tại quy chế bán đấu giá, nhà đầu tư đủ điều kiện tham gia đấu
giá thực hiện đăng ký đấu giá và thực hiện nộp tiền đặt cọc. doanh nghiệp mua
bán nợ quyết định tỷ lệ đặt cọc của nhà đầu tư nhưng không thấp hơn 10% tổng giá
trị lô cổ phần kèm nợ phải thu tính theo giá khởi điểm. sau khi đăng ký đấu giá
và hoàn tất thủ tục đặt cọc, nhà đầu tư được tổ chức đấu giá cung cấp phiếu tham
dự đấu giá để thực hiện đặt giá mua (giá đấu). Thực hiện bán đấu giá. khoản 3.
trong thời hạn quy định tại quy chế bán đấu giá, các nhà đầu tư ghi giá đặt mua
(giá đấu) vào phiếu tham dự đấu giá và bỏ phiếu trực tiếp tại địa điểm tổ chức
đấu giá hoặc bỏ phiếu qua đường bưu điện theo quy định tại quy chế bán đấu giá
cổ phần. mỗi nhà đầu tư chỉ được cấp một phiếu tham dự đấu giá và chỉ được bỏ
một mức giá cho toàn bộ lô cổ phần kèm nợ phải thu.
- 'Điều 2. Giải thích từ ngữ. trong thông tư này, các từ ngữ dưới đây được hiểu
như sau: Giải thích từ ngữ. khoản 1. hệ thống thông tin duyên hải là hạ tầng mạng
viễn thông hàng hải do nhà nước đầu tư và giao cho công ty tnhh mtv thông tin
điện tử hàng hải việt nam quản lý, khai thác. Giải thích từ ngữ. điểm i) dịch
vụ kết nối thông tin ngành hàng hải: là dịch vụ vận hành mạng công nghệ thông
tin nội bộ (gọi tắt là mạng intranet hàng hải) nhằm kết nối, chia sẻ thông tin
chuyên ngành hàng hải do đài trung tâm xử lý thông tin hàng hải hà nội cung cấp.
Giải thích từ ngữ. khoản 3. vùng biển a1: là vùng biển nằm trong phạm vi phủ sóng
vô tuyến điện thoại của ít nhất một đài thông tin duyên hải vhf mà trong đó tàu
thuyền có khả năng báo động cấp cứu liên tục bằng gọi chọn số (vùng biển này có
bán kính cách đài thông tin duyên hải khoảng 30 hải lý). Giải thích từ ngữ. khoản
4. vùng biển a2: là vùng biển phía ngoài vùng biển a1 và trong phạm vi vùng phủ
sóng vô tuyến điện thoại của ít nhất một đài thông tin duyên hải mf mà trong đó
tàu thuyền có khả năng báo động cấp cứu liên tục bằng gọi chọn số (vùng biển này
có bán kính cách đài thông tin duyên hải khoảng 250 hải lý). Giải thích từ ngữ.
khoản 5. vùng biển a3: là vùng biển phía ngoài vùng biển a1, a2 và trong phạm
vi phủ sóng của vệ tinh địa tĩnh inmarsat mà trong đó tàu thuyền có khả năng báo
động cấp cứu liên tục (vùng biển này có phạm vi từ vĩ tuyến 70° bắc đến vĩ tuyến
70° nam). Giải thích từ ngữ. khoản 6. vùng biển a4: là vùng ngoài vùng biển a1,
a2 và a Giải thích từ ngữ. khoản 3. bản chất là các vùng cực của trái đất từ vĩ
tuyến 70° bắc đến cực bắc và từ vĩ tuyến 70° nam đến cực nam nhưng không gồm bất
kỳ các vùng biển khác. Giải thích từ ngữ. khoản 7. thông tin lrit: là thông tin
về mã nhận dạng, vị trí, thời gian xác định vị trí của tàu thuyền theo giờ quốc
tế (utc) được phát ra từ thiết bị lrit. Giải thích từ ngữ. khoản 8. vùng thông
tin nhận dạng và truy theo tầm xa của việt nam (sau đây gọi tắt là vùng thông
tin lrit của việt nam): là vùng thông tin do bộ giao thông vận tải tổ chức công
bố theo quy định của pháp luật việt nam và điều ước quốc tế liên quan mà việt
nam là thành viên, bao gồm: vùng nội thủy lrit, vùng lãnh hải lrit, vùng Giải
thích từ ngữ. khoản 1.000 hải lý và vùng lrit tự chọn. Giải thích từ ngữ. khoản
9. đơn vị trên bờ là các đơn vị liên quan đến công tác tiếp nhận, xử lý thông
tin báo động cấp cứu, khẩn cấp, an toàn - an ninh, thông thường là các cơ quan
phối hợp tìm kiếm, cứu nạn, các đài thông tin duyên hải, chủ tàu. Giải thích từ
ngữ. khoản 10. các từ viết tắt:'
- source_sentence: Xử phạt công chứng viên nhận hướng dẫn tập sự khi không đủ điều
kiện theo quy định được quy định như thế nào?
sentences:
- Điều 47. Thủ tục cấp chứng chỉ hành nghề đo đạc và bản đồ. khoản 1. sau thời hạn
10 ngày làm việc kể từ ngày đăng tải kết quả sát hạch và xét cấp chứng chỉ hành
nghề theo quy định tại khoản 6 điều 44 nghị định này, thủ trưởng cơ quan có thẩm
quyền cấp chứng chỉ ký, cấp chứng chỉ hành nghề đo đạc và bản đồ. Thủ tục cấp
chứng chỉ hành nghề đo đạc và bản đồ. khoản 2. trường hợp cá nhân không đủ điều
kiện cấp chứng chỉ hành nghề đo đạc và bản đồ, cơ quan có thẩm quyền cấp chứng
chỉ phải thông báo, nêu rõ lý do không cấp chứng chỉ cho cá nhân đề nghị cấp chứng
chỉ.
- Điều 13. Tập sự hành nghề đấu giá. khoản 1. người có giấy chứng nhận tốt nghiệp
đào tạo nghề đấu giá và người được miễn đào tạo nghề đấu giá được tập sự hành
nghề đấu giá tại tổ chức đấu giá tài sản. Tập sự hành nghề đấu giá. khoản 2. thời
gian tập sự hành nghề đấu giá là 06 tháng. thời gian tập sự hành nghề đấu giá
được tính từ ngày tổ chức đấu giá tài sản thông báo danh sách người tập sự hành
nghề đấu giá tại tổ chức mình cho sở tư pháp nơi tổ chức đấu giá tài sản có trụ
sở. Tập sự hành nghề đấu giá. khoản 3. tổ chức đấu giá tài sản phân công đấu giá
viên hướng dẫn người tập sự hành nghề đấu giá. đấu giá viên hướng dẫn tập sự phải
hướng dẫn, giám sát và chịu trách nhiệm về các công việc do người tập sự thực
hiện. người tập sự hành nghề đấu giá được hướng dẫn các kỹ năng hành nghề và thực
hiện các công việc liên quan đến đấu giá tài sản do đấu giá viên hướng dẫn phân
công và chịu trách nhiệm trước đấu giá viên hướng dẫn về những công việc đó. người
tập sự hành nghề đấu giá không được điều hành cuộc đấu giá. Tập sự hành nghề đấu
giá. khoản 4. người hoàn thành thời gian tập sự quy định tại khoản 2 điều này
được tham dự kiểm tra kết quả tập sự hành nghề đấu giá. nội dung kiểm tra kết
quả tập sự hành nghề đấu giá bao gồm kỹ năng hành nghề đấu giá, pháp luật về đấu
giá tài sản, pháp luật có liên quan, quy tắc đạo đức nghề nghiệp đấu giá viên.
Tập sự hành nghề đấu giá. khoản 5. việc kiểm tra kết quả tập sự hành nghề đấu
giá do hội đồng kiểm tra kết quả tập sự hành nghề đấu giá thực hiện. bộ tư pháp
thành lập hội đồng kiểm tra kết quả tập sự hành nghề đấu giá; thành phần hội đồng
bao gồm đại diện bộ tư pháp làm chủ tịch, đại diện các cơ quan, tổ chức có liên
quan và một số đấu giá viên là thành viên.
- Điều 15. Hành vi vi phạm quy định hoạt động hành nghề công chứng. điểm d) tham
gia không đầy đủ nghĩa vụ bồi dưỡng nghiệp vụ công chứng hằng năm theo quy định.
Hành vi vi phạm quy định hoạt động hành nghề công chứng. điểm n) từ chối hướng
dẫn tập sự hành nghề công chứng không có lý do chính đáng. Hành vi vi phạm quy
định hoạt động hành nghề công chứng. điểm q) công chứng hợp đồng, giao dịch trong
trường hợp không có căn cứ xác định quyền sử dụng riêng, quyền sở hữu riêng đối
với tài sản khi tham gia hợp đồng, giao dịch. Hành vi vi phạm quy định hoạt động
hành nghề công chứng. điểm s) quảng cáo trên các phương tiện thông tin đại chúng
về công chứng viên và tổ chức mình. Hành vi vi phạm quy định hoạt động hành nghề
công chứng. điểm b) hành nghề trong thời gian bị tạm đình chỉ hành nghề công chứng
hoặc trong thời gian bị tước quyền sử dụng thẻ công chứng viên. Hành vi vi phạm
quy định hoạt động hành nghề công chứng. điểm d) góp vốn, nhận góp vốn thành lập,
duy trì tổ chức và hoạt động văn phòng công chứng không đúng quy định. Hành vi
vi phạm quy định hoạt động hành nghề công chứng. điểm c) chuyển nhượng hoặc nhận
chuyển nhượng văn phòng công chứng khi văn phòng công chứng hoạt động chưa đủ
02 năm. Hành vi vi phạm quy định hoạt động hành nghề công chứng. điểm c) tịch
thu tang vật là quyết định bổ nhiệm, bổ nhiệm lại công chứng viên hoặc thẻ công
chứng viên bị tẩy xoá, sửa chữa làm sai lệch nội dung đối với hành vi vi phạm
quy định tại điểm h khoản 4 điều này; giấy tờ, văn bản bị tẩy xoá, sửa chữa làm
sai lệch nội dung đối với hành vi vi phạm quy định tại điểm m khoản 2 điều này.
Hành vi vi phạm quy định hoạt động hành nghề công chứng. điểm c) buộc tổ chức
hành nghề công chứng đang lưu trữ hồ sơ công chứng thông báo cho cơ quan, tổ chức,
cá nhân có quyền, nghĩa vụ liên quan về hành vi vi phạm quy định tại các điểm
m và q khoản 3, các điểm a, b, d, đ, e, g, p và q khoản 4, khoản 5, các điểm b
và c khoản 6 điều này.
- source_sentence: Mức phạt đối với xe ô tô gắn biển số bị bẻ cong, bị che lấp, bị
hỏng
sentences:
- Điều 16. Xử phạt người điều khiển xe ô tô (bao gồm cả rơ moóc hoặc sơ mi rơ moóc
được kéo theo) và các loại xe tương tự xe ô tô vi phạm quy định về điều kiện của
phương tiện khi tham gia giao thông. khoản 1. phạt tiền từ < mức phạt tiền > đến
< mức phạt tiền > đối với hành vi điều khiển xe không có kính chắn gió hoặc có
nhưng vỡ hoặc có nhưng không có tác dụng (đối với xe có thiết kế lắp kính chắn
gió). Xử phạt người điều khiển xe ô tô (bao gồm cả rơ moóc hoặc sơ mi rơ moóc
được kéo theo) và các loại xe tương tự xe ô tô vi phạm quy định về điều kiện của
phương tiện khi tham gia giao thông. điểm c) điều khiển xe không có bộ phận giảm
thanh, giảm khói hoặc có nhưng không có tác dụng, không bảo đảm quy chuẩn môi
trường về khí thải, tiếng ồn. Xử phạt người điều khiển xe ô tô (bao gồm cả rơ
moóc hoặc sơ mi rơ moóc được kéo theo) và các loại xe tương tự xe ô tô vi phạm
quy định về điều kiện của phương tiện khi tham gia giao thông. điểm e) điều khiển
xe ô tô kinh doanh vận tải hành khách lắp thêm hoặc tháo bớt ghế, giường nằm hoặc
có kích thước khoang chở hành lý (hầm xe) không đúng thiết kế của nhà sản xuất
hoặc thiết kế đã đăng ký với cơ quan đăng ký xe hoặc thiết kế cải tạo đã được
cơ quan có thẩm quyền phê duyệt. Xử phạt người điều khiển xe ô tô (bao gồm cả
rơ moóc hoặc sơ mi rơ moóc được kéo theo) và các loại xe tương tự xe ô tô vi phạm
quy định về điều kiện của phương tiện khi tham gia giao thông. điểm đ) điều khiển
xe không đủ hệ thống hãm hoặc có đủ hệ thống hãm nhưng không có tác dụng, không
đúng tiêu chuẩn an toàn kỹ thuật. Xử phạt người điều khiển xe ô tô (bao gồm cả
rơ moóc hoặc sơ mi rơ moóc được kéo theo) và các loại xe tương tự xe ô tô vi phạm
quy định về điều kiện của phương tiện khi tham gia giao thông. điểm e) điều khiển
xe không có giấy chứng nhận hoặc tem kiểm định an toàn kỹ thuật và bảo vệ môi
trường (đối với loại xe có quy định phải kiểm định, trừ xe đăng ký tạm thời) hoặc
có nhưng đã hết hạn sử dụng từ 01 tháng trở lên (kể cả rơ moóc và sơ mi rơ moóc).
Xử phạt người điều khiển xe ô tô (bao gồm cả rơ moóc hoặc sơ mi rơ moóc được kéo
theo) và các loại xe tương tự xe ô tô vi phạm quy định về điều kiện của phương
tiện khi tham gia giao thông. điểm đ) thực hiện hành vi quy định tại điểm a khoản
4, điểm đ khoản 5 điều này trong trường hợp không có giấy đăng ký xe hoặc sử dụng
giấy đăng ký xe không do cơ quan có thẩm quyền cấp, không đúng số khung, số máy
của xe hoặc bị tẩy xóa (kể cả rơ moóc và sơ mi rơ moóc) mà không chứng minh được
nguồn gốc xuất xứ của phương tiện (không có giấy tờ, chứng từ chuyển quyền sở
hữu xe hoặc giấy tờ, chứng từ nguồn gốc xe hợp pháp) thì bị tịch thu phương tiện.
Xử phạt người điều khiển xe ô tô (bao gồm cả rơ moóc hoặc sơ mi rơ moóc được kéo
theo) và các loại xe tương tự xe ô tô vi phạm quy định về điều kiện của phương
tiện khi tham gia giao thông. điểm b) thực hiện hành vi quy định tại điểm a, điểm
e khoản 3 điều này buộc phải lắp đầy đủ thiết bị hoặc khôi phục lại tính năng
kỹ thuật của thiết bị theo quy định, tháo bỏ những thiết bị lắp thêm không đúng
quy định.
- Điều 94. Vi phạm quy định liên quan tới thư điện tử, tin nhắn cung cấp thông tin
về sản phẩm, dịch vụ. khoản 1. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền
> đối với hành vi cung cấp số điện thoại liên hệ trong các biển quảng cáo, rao
vặt được treo, đặt, dán, vẽ các sản phẩm quảng cáo trên cột điện, trụ điện, cột
tín hiệu giao thông, bờ tường, cây xanh, nơi công cộng. Vi phạm quy định liên
quan tới thư điện tử, tin nhắn cung cấp thông tin về sản phẩm, dịch vụ. điểm b)
gắn nhãn thư điện tử quảng cáo, tin nhắn quảng cáo không đúng hoặc không đầy đủ
theo quy định. Vi phạm quy định liên quan tới thư điện tử, tin nhắn cung cấp thông
tin về sản phẩm, dịch vụ. điểm c) gửi tin nhắn quảng cáo, thư điện tử quảng cáo,
tin nhắn qua mạng internet khi chưa được cấp mã số quản lý hoặc có mã số quản
lý không đúng mã số quản lý được bộ thông tin và truyền thông cấp. Vi phạm quy
định liên quan tới thư điện tử, tin nhắn cung cấp thông tin về sản phẩm, dịch
vụ. điểm o) thực hiện không đầy đủ các yêu cầu điều phối, ngăn chặn, xử lý tin
nhắn rác. Vi phạm quy định liên quan tới thư điện tử, tin nhắn cung cấp thông
tin về sản phẩm, dịch vụ. điểm đ) không thực hiện các biện pháp đánh giá tình
trạng tin nhắn rác trên mạng viễn thông di động của nhà cung cấp dịch vụ tin nhắn
theo hướng dẫn của bộ thông tin và truyền thông. Vi phạm quy định liên quan tới
thư điện tử, tin nhắn cung cấp thông tin về sản phẩm, dịch vụ. điểm đ) số dịch
vụ gọi tự do, số dịch vụ gọi giá cao được mở chiều gọi đi hoặc để gửi tin nhắn
hoặc nhận tin nhắn. Vi phạm quy định liên quan tới thư điện tử, tin nhắn cung
cấp thông tin về sản phẩm, dịch vụ. khoản 7. phạt tiền từ < mức phạt tiền > đến
< mức phạt tiền > đối với hành vi quảng cáo bằng thư điện tử hoặc quảng cáo bằng
tin nhắn hoặc cung cấp dịch vụ nhắn tin qua mạng internet nhưng không có hệ thống
tiếp nhận, xử lý yêu cầu từ chối của người nhận. Vi phạm quy định liên quan tới
thư điện tử, tin nhắn cung cấp thông tin về sản phẩm, dịch vụ. khoản 8. phạt tiền
từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi không ngăn chặn, thu
hồi số thuê bao được dùng để phát tán tin nhắn rác. Vi phạm quy định liên quan
tới thư điện tử, tin nhắn cung cấp thông tin về sản phẩm, dịch vụ. điểm b) tước
quyền sử dụng mã số quản lý, tên định danh từ 01 tháng đến 03 tháng đối với hành
vi vi phạm quy định tại các điểm a và b khoản 3, các điểm d, g, h, i và o khoản
4, các điểm a và b khoản 6 điều này. Vi phạm quy định liên quan tới thư điện tử,
tin nhắn cung cấp thông tin về sản phẩm, dịch vụ. điểm b) buộc thu hồi đầu số,
kho số viễn thông do thực hiện hành vi vi phạm tại điểm h khoản 4, các điểm b
và c khoản 5 và khoản 6 điều này.
- 'Điều 23. Xử phạt người điều khiển xe ô tô chở hành khách, ô tô chở người và các
loại xe tương tự xe ô tô chở hành khách, chở người vi phạm quy định về vận tải
đường bộ. điểm b) không mặc đồng phục, không đeo thẻ tên của lái xe theo quy định.
Xử phạt người điều khiển xe ô tô chở hành khách, ô tô chở người và các loại xe
tương tự xe ô tô chở hành khách, chở người vi phạm quy định về vận tải đường bộ.
khoản 2. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > trên mỗi người vượt
quá quy định được phép chở của phương tiện nhưng tổng mức phạt tiền tối đa không
vượt quá < mức phạt tiền > đối với người điều khiển xe ô tô chở hành khách, ô
tô chở người (trừ xe buýt) thực hiện hành vi vi phạm: chở quá từ 02 người trở
lên trên xe đến 9 chỗ, chở quá từ 03 người trở lên trên xe 10 chỗ đến xe 15 chỗ,
chở quá từ 04 người trở lên trên xe 16 chỗ đến xe 30 chỗ, chở quá từ 05 người
trở lên trên xe trên 30 chỗ, trừ các hành vi vi phạm quy định tại khoản 4 điều
này. Xử phạt người điều khiển xe ô tô chở hành khách, ô tô chở người và các loại
xe tương tự xe ô tô chở hành khách, chở người vi phạm quy định về vận tải đường
bộ. điểm p) điều khiển xe taxi sử dụng phần mềm tính tiền mà trên xe không có
thiết bị để kết nối trực tiếp với hành khách theo quy định. Xử phạt người điều
khiển xe ô tô chở hành khách, ô tô chở người và các loại xe tương tự xe ô tô chở
hành khách, chở người vi phạm quy định về vận tải đường bộ. khoản 4. phạt tiền
từ < mức phạt tiền > đến < mức phạt tiền > trên mỗi người vượt quá quy định được
phép chở của phương tiện nhưng tổng mức phạt tiền tối đa không vượt quá < mức
phạt tiền > đối với người điều khiển xe ô tô chở hành khách chạy tuyến có cự ly
lớn hơn 300 km thực hiện hành vi vi phạm: chở quá từ 02 người trở lên trên xe
đến 9 chỗ, chở quá từ 03 người trở lên trên xe 10 chỗ đến xe 15 chỗ, chở quá từ
04 người trở lên trên xe 16 chỗ đến xe 30 chỗ, chở quá từ 05 người trở lên trên
xe trên 30 chỗ. Xử phạt người điều khiển xe ô tô chở hành khách, ô tô chở người
và các loại xe tương tự xe ô tô chở hành khách, chở người vi phạm quy định về
vận tải đường bộ. điểm q) điều khiển xe vận chuyển khách du lịch, xe vận chuyển
hành khách theo hợp đồng sử dụng hợp đồng điện tử không có thiết bị để truy cập
được nội dung của hợp đồng điện tử và danh sách hành khách hoặc có nhưng không
cung cấp cho lực lượng chức năng khi có yêu cầu, chở người không có tên trong
danh sách hành khách hoặc vận chuyển không đúng đối tượng theo quy định (đối với
xe kinh doanh vận tải hành khách theo hợp đồng vận chuyển học sinh, sinh viên,
cán bộ công nhân viên đi học, đi làm việc). Xử phạt người điều khiển xe ô tô chở
hành khách, ô tô chở người và các loại xe tương tự xe ô tô chở hành khách, chở
người vi phạm quy định về vận tải đường bộ. điểm e) điều khiển xe chở hành khách
liên vận quốc tế không có hoặc không gắn ký hiệu phân biệt quốc gia, phù hiệu
liên vận theo quy định hoặc có nhưng đã hết giá trị sử dụng hoặc sử dụng phù hiệu
không do cơ quan có thẩm quyền cấp. Xử phạt người điều khiển xe ô tô chở hành
khách, ô tô chở người và các loại xe tương tự xe ô tô chở hành khách, chở người
vi phạm quy định về vận tải đường bộ. điểm b) điều khiển xe chở hành khách không
có hoặc không gắn phù hiệu (biển hiệu) theo quy định hoặc có nhưng đã hết giá
trị sử dụng hoặc sử dụng phù hiệu (biển hiệu) không do cơ quan có thẩm quyền cấp.
Xử phạt người điều khiển xe ô tô chở hành khách, ô tô chở người và các loại xe
tương tự xe ô tô chở hành khách, chở người vi phạm quy định về vận tải đường bộ.
điểm d) thực hiện hành vi quy định tại điểm e khoản 6, điểm b khoản 7 điều này
bị tịch thu phù hiệu (biển hiệu) đã hết giá trị sử dụng hoặc không do cơ quan
có thẩm quyền cấp. Xử phạt người điều khiển xe ô tô chở hành khách, ô tô chở người
và các loại xe tương tự xe ô tô chở hành khách, chở người vi phạm quy định về
vận tải đường bộ. điểm b) thực hiện hành vi quy định tại điểm l khoản 3 điều này
(trường hợp thu tiền vé cao hơn quy định) buộc phải nộp lại số lợi bất hợp pháp
có được do thực hiện vi phạm hành chính.'
- source_sentence: Tiêu chuẩn của thành viên Hội đồng tư vấn chuyên môn đánh giá nguyên
nhân tai biến nặng trong quá trình sử dụng vắc xin cấp Bộ được quy định như thế
nào?
sentences:
- 'Điều 27. Thanh toán chi phí khám bệnh, chữa bệnh một số trường hợp. khoản 1.
thanh toán chi phí khám bệnh, chữa bệnh đối với trẻ em dưới 6 tuổi trong trường
hợp chưa có thẻ bảo hiểm y tế: cơ sở khám bệnh, chữa bệnh tổng hợp danh sách trẻ
em dưới 6 tuổi và chi phí khám bệnh, chữa bệnh bảo hiểm y tế theo phạm vi được
hưởng và mức hưởng gửi cơ quan bảo hiểm xã hội thanh toán theo quy định. cơ quan
bảo hiểm xã hội căn cứ danh sách số trẻ em đã được khám bệnh, chữa bệnh do cơ
sở khám bệnh, chữa bệnh chuyển đến, có trách nhiệm kiểm tra, xác minh việc cấp
thẻ bảo hiểm y tế cho trẻ; thực hiện thanh toán chi phí khám bệnh, chữa bệnh.
trường hợp trẻ em chưa được cấp thẻ thì thực hiện cấp thẻ theo quy định. Thanh
toán chi phí khám bệnh, chữa bệnh một số trường hợp. khoản 2. thanh toán chi phí
khám bệnh, chữa bệnh đối với người đã hiến bộ phận cơ thể người phải điều trị
ngay sau khi hiến mà chưa có thẻ bảo hiểm y tế: cơ sở khám bệnh, chữa bệnh sau
khi lấy bộ phận cơ thể người có trách nhiệm tổng hợp danh sách số người đã hiến
và chi phí khám bệnh, chữa bệnh theo phạm vi được hưởng và mức hưởng bảo hiểm
y tế sau khi hiến, gửi cơ quan bảo hiểm xã hội để thanh toán theo quy định. cơ
quan bảo hiểm xã hội căn cứ danh sách số người đã hiến bộ phận cơ thể đã được
khám bệnh, chữa bệnh sau khi hiến và chi phí do cơ sở khám bệnh, chữa bệnh chuyển
đến để thực hiện thanh toán, cấp thẻ bảo hiểm y tế. Thanh toán chi phí khám bệnh,
chữa bệnh một số trường hợp. điểm c) trường hợp người bệnh có số tiền cùng chi
trả vượt quá 06 tháng lương cơ sở được tính từ ngày 01 tháng 01, quỹ bảo hiểm
y tế thanh toán 100% chi phí khám bệnh chữa bệnh trong phạm vi quyền lợi của người
bệnh kể từ thời điểm người bệnh tham gia đủ 05 năm liên tục đến hết ngày 31 tháng
12 của năm đó. Thanh toán chi phí khám bệnh, chữa bệnh một số trường hợp. khoản
4. trường hợp chuyển tuyến khám bệnh, chữa bệnh đối với người bệnh cần phải có
nhân viên y tế đi kèm và có sử dụng thuốc, vật tư y tế theo yêu cầu chuyên môn
trong quá trình vận chuyển, thì chi phí thuốc, vật tư y tế được tổng hợp vào chi
phí điều trị của cơ sở khám bệnh, chữa bệnh chỉ định chuyển tuyến. Thanh toán
chi phí khám bệnh, chữa bệnh một số trường hợp. khoản 5. trường hợp người bệnh
sau khi đã điều trị nội trú ổn định nhưng cần phải tiếp tục sử dụng thuốc sau
khi ra viện theo chỉ định của cơ sở khám bệnh, chữa bệnh theo quy định của bộ
trưởng bộ y tế, quỹ bảo hiểm y tế thanh toán chi phí thuốc trong phạm vi được
hưởng và mức hưởng theo chế độ quy định. cơ sở khám bệnh, chữa bệnh tổng hợp khoản
chi thuốc này vào chi phí khám bệnh, chữa bệnh của người bệnh trước khi ra viện.
Thanh toán chi phí khám bệnh, chữa bệnh một số trường hợp. khoản 6. trường hợp
cơ sở khám bệnh, chữa bệnh không thực hiện được xét nghiệm cận lâm sàng, chẩn
đoán hình ảnh, thăm dò chức năng và phải chuyển người bệnh hoặc mẫu bệnh phẩm
đến cơ sở khám bệnh, chữa bệnh bảo hiểm y tế hoặc cơ sở được cấp có thẩm quyền
phê duyệt đủ điều kiện thực hiện để thực hiện các dịch vụ đó, thì quỹ bảo hiểm
y tế thanh toán chi phí thực hiện dịch vụ trong phạm vi được hưởng và mức hưởng
theo quy định cho cơ sở khám bệnh, chữa bệnh nơi chuyển người bệnh, mẫu bệnh phẩm.
cơ sở khám bệnh, chữa bệnh chuyển người bệnh hoặc mẫu bệnh phẩm có trách nhiệm
thanh toán chi phí cho cơ sở khám bệnh, chữa bệnh hoặc đơn vị thực hiện dịch vụ,
sau đó tổng hợp vào chi phí khám bệnh, chữa bệnh của người bệnh để thanh toán
với cơ quan bảo hiểm xã hội. bộ trưởng bộ y tế quy định nguyên tắc, danh mục xét
nghiệm cận lâm sàng chẩn đoán hình ảnh, thăm dò chức năng được chuyển đến cơ sở
khám bệnh, chữa bệnh hoặc đơn vị thực hiện dịch vụ. Thanh toán chi phí khám bệnh,
chữa bệnh một số trường hợp. điểm c) đối với chi phí về thuốc, hóa chất, vật tư
y tế, quỹ bảo hiểm y tế thanh toán theo giá mua của cơ sở khám bệnh, chữa bệnh
theo quy định về đấu thầu. Thanh toán chi phí khám bệnh, chữa bệnh một số trường
hợp. khoản 8. thanh toán chi phí khám bệnh, chữa bệnh đối với trường hợp cơ sở
khám bệnh, chữa bệnh triển khai kỹ thuật, phương pháp mới đã được cấp có thẩm
quyền phê duyệt nhưng chưa có quy định về giá dịch vụ y tế thì cơ sở khám bệnh,
chữa bệnh phải xây dựng và trình cấp có thẩm quyền phê duyệt giá dịch vụ kỹ thuật
để làm căn cứ thanh toán. cơ sở khám bệnh, chữa bệnh có trách nhiệm thông báo
bằng văn bản cho cơ quan bảo hiểm xã hội về việc triển khai kỹ thuật, phương pháp
mới. Thanh toán chi phí khám bệnh, chữa bệnh một số trường hợp. khoản 9. trường
hợp người có thẻ bảo hiểm y tế đang điều trị nội trú tại cơ sở khám bệnh, chữa
bệnh nhưng thẻ bảo hiểm y tế hết hạn sử dụng thì được quỹ bảo hiểm y tế thanh
toán chi phí'
- 'Điều 35. Hồ sơ cấp Giấy chứng nhận đủ điều kiện hoạt động dịch vụ đánh giá công
nghệ. điểm d) tài liệu thuyết minh phương pháp, quy trình đánh giá công nghệ tương
ứng với từng lĩnh vực công nghệ cần đánh giá. Hồ sơ cấp Giấy chứng nhận đủ điều
kiện hoạt động dịch vụ đánh giá công nghệ. điểm b) danh sách sửa đổi, bổ sung
các chuyên gia đánh giá công nghệ, trong đó thể hiện các thông tin về tên, năm
sinh, trình độ, lĩnh vực đào tạo, số năm công tác trong lĩnh vực công nghệ cần
đánh giá, kèm theo các tài liệu liên quan đối với mỗi chuyên gia đánh giá công
nghệ gồm: thỏa thuận hợp tác giữa chuyên gia với tổ chức; bản sao chứng thực bằng
cấp theo quy định tại khoản 2 điều 33 nghị định này; tóm tắt quá trình công tác,
kinh nghiệm hoạt động đánh giá công nghệ và tài liệu chứng minh kinh nghiệm hoạt
động đánh giá công nghệ của chuyên gia. danh sách chuyên gia đánh giá công nghệ
bổ sung, sửa đổi của tổ chức và tóm tắt kinh nghiệm hoạt động đánh giá công nghệ
của chuyên gia đánh giá công nghệ bổ sung, sửa đổi theo mẫu số 07 và mẫu số 08
tại phụ lục iv ban hành kèm theo nghị định này. Hồ sơ cấp Giấy chứng nhận đủ điều
kiện hoạt động dịch vụ đánh giá công nghệ. điểm b) bản chính giấy chứng nhận bị
hư hỏng (nếu có) đối với trường hợp giấy chứng nhận bị hư hỏng.'
- Điều 2. Chức năng, nhiệm vụ, quyền hạn và cơ cấu thành viên của Hội đồng cấp Bộ.
khoản 1. chức năng của hội đồng cấp bộ tư vấn chuyên môn cho bộ trưởng bộ y tế
trong việc giải quyết các trường hợp tai biến nặng sau tiêm chủng theo quy định
tại điều 6 nghị định số 104/2016/nđ-cp ngày 01 tháng 7 năm 2016 của chính phủ
quy định về hoạt động tiêm chủng. Chức năng, nhiệm vụ, quyền hạn và cơ cấu thành
viên của Hội đồng cấp Bộ. điểm b) đánh giá lại kết luận của hội đồng cấp tỉnh
trong trường hợp có khiếu nại của tổ chức, cá nhân đối với kết luận của hội đồng
cấp tỉnh hoặc theo yêu cầu của cơ quan nhà nước có thẩm quyền. Chức năng, nhiệm
vụ, quyền hạn và cơ cấu thành viên của Hội đồng cấp Bộ. điểm b) được bảo đảm các
điều kiện để thực hiện nhiệm vụ. Chức năng, nhiệm vụ, quyền hạn và cơ cấu thành
viên của Hội đồng cấp Bộ. điểm d) trong trường hợp cần thiết và tùy theo từng
trường hợp cụ thể, chủ tịch hội đồng cấp bộ có thể mời thêm các chuyên gia về
tài chính, giám định pháp y, hồi sức cấp cứu, pháp luật và những lĩnh vực khác
liên quan đến tai biến nặng sau tiêm chủng tham gia hội đồng. Chức năng, nhiệm
vụ, quyền hạn và cơ cấu thành viên của Hội đồng cấp Bộ. điểm b) những người thực
hiện hoạt động về tiêm chủng thì không tham gia vào thành phần hội đồng.
- source_sentence: Bộ Thông tin và Truyền thông có trách nhiệm gì trong thực hiện
thủ tục hành chính điện tử?
sentences:
- Điều 6. Phóng viên nước ngoài đi theo đoàn khách nước ngoài. khoản 1. đối với
các phóng viên nước ngoài đi theo đoàn khách nước ngoài thăm việt nam theo lời
mời của lãnh đạo đảng và nhà nước hoặc bộ ngoại giao để đưa tin về chuyến thăm,
cơ quan chủ quản việt nam có trách nhiệm làm các thủ tục nhập - xuất cảnh cần
thiết và thông báo cho bộ ngoại giao biết để phối hợp. phóng viên nước ngoài được
phép đưa tin các hoạt động theo chương trình chính thức của đoàn khách nước ngoài.
trường hợp phóng viên nước ngoài có yêu cầu hoạt động thông tin, báo chí nằm ngoài
chương trình hoạt động chính thức của đoàn khách nước ngoài, phóng viên phải có
văn bản đề nghị gửi bộ ngoại giao và phải tuân thủ các quy định như đối với phóng
viên không thường trú quy định tại điều 4 và điều 5 của nghị định này. Phóng viên
nước ngoài đi theo đoàn khách nước ngoài. khoản 2. đối với phóng viên nước ngoài
đi theo đoàn khách nước ngoài theo lời mời của các cơ quan khác của việt nam để
đưa tin về chuyến thăm, cơ quan chủ quản việt nam cần làm thủ tục với bộ ngoại
giao như đối với phóng viên không thường trú và hoạt động dưới sự hướng dẫn của
trung tâm hướng dẫn báo chí nước ngoài (bộ ngoại giao) hoặc một cơ quan được bộ
ngoại giao chấp thuận.
- Điều 14. Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động
sản qua mạng internet, trang điện tử. điểm b) khai thác, sử dụng thông tin, dữ
liệu về nhà ở và thị trường bất động sản theo quy định của pháp luật được công
khai, phổ biến rộng rãi. Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị
trường bất động sản qua mạng internet, trang điện tử. điểm c) bên cung cấp gửi
cho bên yêu cầu tài khoản truy cập tra cứu thông tin, dữ liệu trong thời hạn không
quá 07 ngày làm việc kể từ khi nhận được yêu cầu hoặc thời điểm bên yêu cầu thanh
toán chi phí sử dụng dịch vụ (nếu có); Khai thác, sử dụng thông tin, dữ liệu về
nhà ở và thị trường bất động sản qua mạng internet, trang điện tử. điểm e) tuân
theo các quy định của pháp luật về bảo vệ bí mật nhà nước; chịu trách nhiệm về
sai phạm trong khai thác, sử dụng thông tin, dữ liệu. Khai thác, sử dụng thông
tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện
tử. điểm b) tiến hành các biện pháp khắc phục sự cố ngay sau khi hệ thống thông
tin của mình bị lỗi trong quá trình hoạt động làm ảnh hưởng hoặc gây ngừng cung
cấp thông tin, dữ liệu, dịch vụ có liên quan trên môi trường mạng. Khai thác,
sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet,
trang điện tử. khoản 5. việc cung cấp, khai thác, sử dụng thông tin, dữ liệu về
nhà ở và thị trường bất động sản qua mạng internet, trang điện tử phải tuân thủ
theo đúng các quy định của luật giao dịch điện tử, luật công nghệ thông tin và
các văn bản hướng dẫn thi hành.
- Điều 29. Bộ Thông tin và Truyền thông. khoản 1. hướng dẫn việc giám sát, đánh
giá hiệu quả, mức độ sử dụng dịch vụ công trực tuyến. Bộ Thông tin và Truyền thông.
khoản 2. hướng dẫn, hỗ trợ việc triển khai tích hợp chữ ký số công cộng trong
quá trình thực hiện dịch vụ công trực tuyến.
datasets:
- batmangiaicuuthegioi/hard_examples_legal_zalo
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: triplet
name: Triplet
dataset:
name: zalo legal
type: zalo_legal
metrics:
- type: cosine_accuracy
value: 0.997871458530426
name: Cosine Accuracy
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the [hard_examples_legal_zalo](https://huggingface.co/datasets/batmangiaicuuthegioi/hard_examples_legal_zalo) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [hard_examples_legal_zalo](https://huggingface.co/datasets/batmangiaicuuthegioi/hard_examples_legal_zalo)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("batmangiaicuuthegioi/bge-m3-finetune-context1024-step1000")
# Run inference
sentences = [
'Bộ Thông tin và Truyền thông có trách nhiệm gì trong thực hiện thủ tục hành chính điện tử?',
'Điều 29. Bộ Thông tin và Truyền thông. khoản 1. hướng dẫn việc giám sát, đánh giá hiệu quả, mức độ sử dụng dịch vụ công trực tuyến. Bộ Thông tin và Truyền thông. khoản 2. hướng dẫn, hỗ trợ việc triển khai tích hợp chữ ký số công cộng trong quá trình thực hiện dịch vụ công trực tuyến.',
'Điều 14. Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện tử. điểm b) khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản theo quy định của pháp luật được công khai, phổ biến rộng rãi. Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện tử. điểm c) bên cung cấp gửi cho bên yêu cầu tài khoản truy cập tra cứu thông tin, dữ liệu trong thời hạn không quá 07 ngày làm việc kể từ khi nhận được yêu cầu hoặc thời điểm bên yêu cầu thanh toán chi phí sử dụng dịch vụ (nếu có); Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện tử. điểm e) tuân theo các quy định của pháp luật về bảo vệ bí mật nhà nước; chịu trách nhiệm về sai phạm trong khai thác, sử dụng thông tin, dữ liệu. Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện tử. điểm b) tiến hành các biện pháp khắc phục sự cố ngay sau khi hệ thống thông tin của mình bị lỗi trong quá trình hoạt động làm ảnh hưởng hoặc gây ngừng cung cấp thông tin, dữ liệu, dịch vụ có liên quan trên môi trường mạng. Khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện tử. khoản 5. việc cung cấp, khai thác, sử dụng thông tin, dữ liệu về nhà ở và thị trường bất động sản qua mạng internet, trang điện tử phải tuân thủ theo đúng các quy định của luật giao dịch điện tử, luật công nghệ thông tin và các văn bản hướng dẫn thi hành.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `zalo_legal`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9979** |
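The reported accuracy can be recomputed along these lines (a sketch; the name of the evaluation split is an assumption):
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("batmangiaicuuthegioi/bge-m3-finetune-context1024-step1000")
# Assumes the 2,349 evaluation samples live in a split named "test".
eval_ds = load_dataset("batmangiaicuuthegioi/hard_examples_legal_zalo", split="test")

evaluator = TripletEvaluator(
    anchors=eval_ds["anchor"],
    positives=eval_ds["positive"],
    negatives=eval_ds["negative"],
    name="zalo_legal",
)
print(evaluator(model))  # e.g. {'zalo_legal_cosine_accuracy': 0.9979}
```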
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### hard_examples_legal_zalo
* Dataset: [hard_examples_legal_zalo](https://huggingface.co/datasets/batmangiaicuuthegioi/hard_examples_legal_zalo) at [6f0ea4d](https://huggingface.co/datasets/batmangiaicuuthegioi/hard_examples_legal_zalo/tree/6f0ea4d085af6d4bb6343930d368b28f2d18766d)
* Size: 10,959 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 23.52 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 379.61 tokens</li><li>max: 1366 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 429.1 tokens</li><li>max: 1366 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Phá hoại thông tin trên môi trường mạng bị xử phạt bao nhiêu tiền?</code> | <code>Điều 75. Vi phạm các quy định về cơ sở hạ tầng thông tin phục vụ ứng dụng và phát triển công nghệ thông tin. khoản 1. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi xâm phạm đến quyền, lợi ích hợp pháp của chủ sở hữu cơ sở dữ liệu hoặc cản trở việc sử dụng cơ sở dữ liệu của chủ sở hữu khi thực hiện tái sản xuất, phân phối, quảng bá, truyền đưa, cung cấp nội dung hợp thành cơ sở dữ liệu. Vi phạm các quy định về cơ sở hạ tầng thông tin phục vụ ứng dụng và phát triển công nghệ thông tin. khoản 2. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi phá hoại cơ sở hạ tầng thông tin hoặc phá hoại thông tin trên môi trường mạng. Vi phạm các quy định về cơ sở hạ tầng thông tin phục vụ ứng dụng và phát triển công nghệ thông tin. điểm b) buộc khôi phục lại tình trạng ban đầu đã bị thay đổi do thực hiện hành vi vi phạm quy định tại khoản 2 điều này.</code> | <code>Điều 41. Vi phạm quy định về nhận biết, phân loại khách hàng theo mức độ rủi ro. khoản 1. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi không áp dụng các biện pháp nhận biết khách hàng, biện pháp đánh giá tăng cường quy định tại các khoản 2, 3 và 4 điều 12 luật phòng, chống rửa tiền, điều 34 luật phòng, chống khủng bố. Vi phạm quy định về nhận biết, phân loại khách hàng theo mức độ rủi ro. khoản 2. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi không phân loại khách hàng theo mức độ rủi ro về rửa tiền và tài trợ khủng bố theo quy định của pháp luật.</code> |
| <code>Không báo cáo hoạt động trong các tổ chức quốc tế về bưu chính với cơ quan nhà nước có thẩm quyền bị phạt thế nào?</code> | <code>Điều 8. Vi phạm các quy định về cung ứng, sử dụng dịch vụ và báo cáo bưu chính. khoản 1. phạt cảnh cáo hoặc phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi cung cấp thông tin về bưu gửi không đầy đủ theo yêu cầu của dịch vụ. Vi phạm các quy định về cung ứng, sử dụng dịch vụ và báo cáo bưu chính. khoản 2. phạt tiền từ < mức phạt tiền > đến < mức phạt tiền > đối với hành vi cung cấp thông tin về bưu gửi không đúng theo yêu cầu của dịch vụ. Vi phạm các quy định về cung ứng, sử dụng dịch vụ và báo cáo bưu chính. điểm d) báo cáo chậm đến 15 ngày hoặc báo cáo không đầy đủ theo quy định hoặc không đúng theo yêu cầu của cơ quan nhà nước có thẩm quyền về bưu chính. Vi phạm các quy định về cung ứng, sử dụng dịch vụ và báo cáo bưu chính. điểm c) báo cáo chậm quá 15 ngày so với quy định. Vi phạm các quy định về cung ứng, sử dụng dịch vụ và báo cáo bưu chính. điểm c) báo cáo không chính xác, không trung thực hoặc không báo cáo theo quy định. Vi phạm các quy định về cung ứng, sử...</code> | <code>Điều 5. Hình thức báo cáo, phương thức gửi và nhận báo cáo. điểm b) trường hợp chưa sử dụng chữ ký số hoặc do yêu cầu công việc hay các trường hợp xảy ra sự cố kỹ thuật, sự việc bất khả kháng, sử dụng hình thức báo cáo bằng văn bản giấy do người có thẩm quyền ký và được đóng dấu theo quy định. Hình thức báo cáo, phương thức gửi và nhận báo cáo. điểm b) báo cáo bằng văn bản giấy được gửi tới nơi nhận báo cáo bằng phương thức gửi trực tiếp hoặc qua dịch vụ bưu chính, fax; có thể đồng thời gửi báo cáo bằng văn bản điện tử qua hệ thống thư điện tử, hoặc dưới dạng đĩa cd.</code> |
| <code>Quy định về Giấy chứng nhận khả năng chuyên môn của thuyền viên được quy định như thế nào?</code> | <code>Điều 19. Giấy chứng nhận khả năng chuyên môn. khoản 1. gcnkncm do cục hàng hải việt nam hoặc chi cục hàng hải hoặc cảng vụ hàng hải được cục hàng hải việt nam ủy quyền cấp cho thuyên viên để đảm nhiệm các chức danh theo quy định của thông tư này, các quy định khác có liên quan của pháp luật việt nam và phù hợp với quy định của công ước stcw. Giấy chứng nhận khả năng chuyên môn. khoản 2. gcnkncm có giá trị sử dụng là 05 năm kể từ ngày cấp, trường hợp tuổi lao động của thuyền viên không còn đủ 05 năm thì thời hạn sử dụng của gcnkncm tương ứng với tuổi lao động còn lại của thuyền viên theo quy định của pháp luật về lao động.</code> | <code>Điều 62. Khung định biên an toàn tối thiểu. điểm b) định biên an toàn tối thiểu bộ phận máy theo tổng công suất máy chính (kw): Khung định biên an toàn tối thiểu. khoản 2. đối với tàu có thiết bị điện phức tạp, đa dạng thì chủ tàu có thể bố trí sỹ quan kỹ thuật điện, thợ kỹ thuật điện. Khung định biên an toàn tối thiểu. điểm c) đối với tàu khách và tàu khách ro-ro, ngoài định biên quy định tại khoản 1 điều này, phải bố trí thêm: 01 (một) thuyền viên phụ trách hành khách với tàu có sức chở đến 200 hành khách, 02 (hai) thuyền viên phụ trách hành khách với tàu có sức chở đến 300 hành khách, 03 (ba) thuyền viên phụ trách hành khách với tàu có sức chở đến 500 hành khách, 4 (bốn) thuyền viên phụ trách hành khách với tàu có sức chở trên 500 hành khách, số lượng thuyền viên phụ trách hành khách được ghi rõ trong phần ghi chú của giấy chứng nhận định biên an toàn tối thiểu. Khung định biên an toàn tối thiểu. khoản 4. đối với tàu biển công vụ, tàu đưa đón hoa tiêu, tàu ngầm, tàu lặn, kho chứa nổ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
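A minimal sketch of how this loss is typically instantiated with sentence-transformers; only `scale` and `similarity_fct` are taken from the configuration above, and the base model name is a placeholder:

```python
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder base model; substitute the actual checkpoint used for training.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# MultipleNegativesRankingLoss treats every other in-batch positive as a
# negative, so larger batches generally yield a stronger training signal.
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,                   # logit scale from the config above
    similarity_fct=util.cos_sim,  # cosine similarity, as configured
)
```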
### Evaluation Dataset
#### hard_examples_legal_zalo
* Dataset: [hard_examples_legal_zalo](https://huggingface.co/datasets/batmangiaicuuthegioi/hard_examples_legal_zalo) at [6f0ea4d](https://huggingface.co/datasets/batmangiaicuuthegioi/hard_examples_legal_zalo/tree/6f0ea4d085af6d4bb6343930d368b28f2d18766d)
* Size: 2,349 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 22.92 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 381.89 tokens</li><li>max: 1366 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 426.7 tokens</li><li>max: 1366 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Dịch vụ viễn thông công ích truyền dẫn vệ tinh để cung cấp dịch vụ băng rộng cho các huyện đảo được quy định ra sao?</code> | <code>Điều 18. Dịch vụ viễn thông công ích truyền dẫn vệ tinh để cung cấp dịch vụ băng rộng cho các huyện đảo. khoản 1. đối tượng được hưởng mức hỗ trợ dịch vụ viễn thông công ích truyền dẫn vệ tinh kết nối đến các huyện đảo là các doanh nghiệp viễn thông thuê kênh truyền dẫn vệ tinh để cung cấp dịch vụ băng rộng cho các huyện đảo. Dịch vụ viễn thông công ích truyền dẫn vệ tinh để cung cấp dịch vụ băng rộng cho các huyện đảo. khoản 2. mức hỗ trợ cho các doanh nghiệp viễn thông tại khoản 1 điều này là < mức phạt tiền >/mhz/tháng (ba mươi sáu triệu bốn trăm năm mươi nghìn đồng cho một megahertz một tháng).</code> | <code>Điều 34. Trách nhiệm của các bộ, cơ quan ngang bộ, cơ quan thuộc Chính phủ. điểm h) chủ trì thẩm định các kịch bản cảnh báo sóng thần đối với việt nam, báo cáo thủ tướng chính phủ. Trách nhiệm của các bộ, cơ quan ngang bộ, cơ quan thuộc Chính phủ. điểm b) chỉ đạo các tổ chức, cá nhân quản lý các hồ thủy lợi trong phạm vi quản lý của bộ thực hiện quy định về cung cấp thông tin về hồ chứa quy định tại điều 37 quyết định này. Trách nhiệm của các bộ, cơ quan ngang bộ, cơ quan thuộc Chính phủ. điểm c) phối hợp với ban chỉ huy phòng, chống thiên tai và tìm kiếm cứu nạn các tỉnh, thành phố trực thuộc trung ương có liên quan thực hiện bắn pháo hiệu và vận hành cột tín hiệu báo áp thấp nhiệt đới, bão theo quy định tại phụ lục vi và phụ lục viii quyết định này. Trách nhiệm của các bộ, cơ quan ngang bộ, cơ quan thuộc Chính phủ. điểm đ) chỉ đạo doanh nghiệp thông tin di động nhắn tin theo yêu cầu của thủ tướng chính phủ, ban chỉ đạo trung ương về phòng, chống thiên tai và ủy ban quốc gia ứng phó s...</code> |
| <code>Nhiệm vụ của Bộ Tài chính về thực hiện cơ chế một cửa trong giải quyết thủ tục hành chính được quy định như thế nào?</code> | <code>Điều 34. Nhiệm vụ của các bộ, cơ quan ngang bộ. điểm i) chủ trì, phối hợp với bộ thông tin và truyền thông và các bộ, ngành, cơ quan có liên quan xây dựng, ban hành quy định thống nhất về mã số hồ sơ thủ tục hành chính và mã ngành, lĩnh vực thủ tục hành chính trên hệ thống thông tin một cửa điện tử cấp bộ, cấp tỉnh. Nhiệm vụ của các bộ, cơ quan ngang bộ. điểm b) chủ trì, phối hợp văn phòng chính phủ, bộ thông tin và truyền thông và các bộ, cơ quan liên quan hướng dẫn lưu trữ hồ sơ, dữ liệu điện tử. Nhiệm vụ của các bộ, cơ quan ngang bộ. điểm c) chủ trì, phối hợp với văn phòng chính phủ, bộ công an thực hiện các biện pháp giám sát, biện pháp bảo đảm an toàn thông tin cho cổng dịch vụ công quốc gia; hướng dẫn các bộ, ngành, địa phương bảo đảm an toàn thông tin cho cổng dịch vụ công và hệ thống thông tin một cửa điện tử cấp bộ, cấp tỉnh. Nhiệm vụ của các bộ, cơ quan ngang bộ. khoản 4. bộ tài chính ban hành quy định về mức chi phục vụ các hoạt động thực hiện cơ chế một cửa, một cửa liên th...</code> | <code>Điều 8. Thủ quỹ ngân hàng. khoản 1. chức trách là công chức chuyên môn nghiệp vụ chuyên ngành ngân hàng, thực hiện nhiệm vụ thu, chi, bảo quản tiền mặt và các giấy tờ có giá trị ở các đơn vị thuộc ngân hàng nhà nước. Thủ quỹ ngân hàng. điểm h) làm các báo cáo thống kê có liên quan khi được phân công. Thủ quỹ ngân hàng. điểm k) đã làm qua công tác kiểm ngân từ đủ 01 năm trở lên. Thủ quỹ ngân hàng. điểm c) có chứng chỉ tin học với trình độ đạt chuẩn kỹ năng sử dụng công nghệ thông tin cơ bản theo quy định tại thông tư số 03/2014/tt-btttt ngày 11 tháng 3 năm 2014 của bộ thông tin và truyền thông quy định chuẩn kỹ năng sử dụng công nghệ thông tin.</code> |
| <code>Tòa gia đình có thẩm quyền xét xử các vụ án hình sự nào?</code> | <code>Điều 3. Thẩm quyền xét xử các vụ án hình sự của Tòa gia đình và người chưa thành niên. tòa gia đình và người chưa thành niên có thẩm quyền xét xử các vụ án hình sự sau đây: Thẩm quyền xét xử các vụ án hình sự của Tòa gia đình và người chưa thành niên. khoản 1. vụ án hình sự có bị cáo là người dưới 18 tuổi. Thẩm quyền xét xử các vụ án hình sự của Tòa gia đình và người chưa thành niên. khoản 2. vụ án hình sự có người bị hại là người dưới 18 tuổi bị tổn thương nghiêm trọng về tâm lý hoặc cần sự hỗ trợ về điều kiện sống, học tập do không có môi trường gia đình lành mạnh như những người dưới 18 tuổi khác.</code> | <code>Điều 6. Thẩm quyền bổ nhiệm, miễn nhiệm Thủ trưởng, Phó Thủ trưởng Cơ quan quản lý thi hành án hình sự thuộc Bộ Công an; Thủ trưởng, Phó Thủ trưởng Cơ quan thi hành án hình sự Công an cấp tỉnh; Thủ trưởng, Phó Thủ trưởng Cơ quan thi hành án hình sự Công an cấp huyện. khoản 1. bộ trưởng bộ công an quyết định bổ nhiệm thủ trưởng, phó thủ trưởng cơ quan quản lý thi hành án hình sự thuộc bộ công an. Thẩm quyền bổ nhiệm, miễn nhiệm Thủ trưởng, Phó Thủ trưởng Cơ quan quản lý thi hành án hình sự thuộc Bộ Công an; Thủ trưởng, Phó Thủ trưởng Cơ quan thi hành án hình sự Công an cấp tỉnh; Thủ trưởng, Phó Thủ trưởng Cơ quan thi hành án hình sự Công an cấp huyện. khoản 2. thứ trưởng phụ trách tổng cục cảnh sát thi hành án hình sự và hỗ trợ tư pháp quyết định bổ nhiệm thủ trưởng, phó thủ trưởng cơ quan thi hành án hình sự công an cấp tỉnh. Thẩm quyền bổ nhiệm, miễn nhiệm Thủ trưởng, Phó Thủ trưởng Cơ quan quản lý thi hành án hình sự thuộc Bộ Công an; Thủ trưởng, Phó Thủ trưởng Cơ quan thi hành án hì...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
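For reference, a minimal sketch for loading the dataset described above with the `datasets` library; the split name is an assumption, since only the column layout (`anchor`, `positive`, `negative`) and the pinned revision are documented here:

```python
from datasets import load_dataset

# Revision pinned to the commit referenced above; the split name is assumed.
ds = load_dataset(
    "batmangiaicuuthegioi/hard_examples_legal_zalo",
    revision="6f0ea4d085af6d4bb6343930d368b28f2d18766d",
    split="train",
)

print(ds.column_names)   # expected: ['anchor', 'positive', 'negative']
print(ds[0]["anchor"])   # a query, e.g. a Vietnamese legal question
```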
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 3
- `per_device_eval_batch_size`: 1
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `max_steps`: 1000
- `warmup_ratio`: 0.1
- `fp16`: True
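A hedged sketch of how the non-default hyperparameters above map onto `SentenceTransformerTrainingArguments`; the output directory is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",            # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=3,
    per_device_eval_batch_size=1,
    learning_rate=2e-5,
    num_train_epochs=1,
    max_steps=1000,                  # overrides num_train_epochs when set
    warmup_ratio=0.1,
    fp16=True,
)
```

Note that `max_steps=1000` caps training regardless of `num_train_epochs`, which is consistent with the training logs below stopping at step 1000 (epoch 0.27).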
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 3
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: 1000
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | zalo_legal_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------:|
| -1 | -1 | - | - | 0.9945 |
| 0.0274 | 100 | 0.068 | - | - |
| 0.0547 | 200 | 0.0316 | - | - |
| 0.0821 | 300 | 0.0268 | - | - |
| 0.1095 | 400 | 0.0496 | - | - |
| 0.1369 | 500 | 0.0248 | - | - |
| 0.1642 | 600 | 0.0179 | - | - |
| 0.1916 | 700 | 0.0541 | - | - |
| 0.2190 | 800 | 0.039 | - | - |
| 0.2464 | 900 | 0.0102 | - | - |
| 0.2737 | 1000 | 0.0299 | 0.0070 | 0.9979 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
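For completeness, a minimal inference sketch compatible with the pinned versions above; the model path is a placeholder for wherever this checkpoint is saved or published, and the example strings are drawn from the samples shown earlier:

```python
from sentence_transformers import SentenceTransformer

# Placeholder path; replace with the published model ID or a local checkpoint.
model = SentenceTransformer("path/to/this-model")

queries = ["Tòa gia đình có thẩm quyền xét xử các vụ án hình sự nào?"]
docs = ["Điều 3. Thẩm quyền xét xử các vụ án hình sự của Tòa gia đình và người chưa thành niên..."]

q_emb = model.encode(queries)
d_emb = model.encode(docs)
print(model.similarity(q_emb, d_emb))  # cosine similarity scores
```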
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->