modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2 | Jilt | 2025-06-20T22:48:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T09:46:06Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2/runs/qz7c4v5r)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.53.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kfigarri/model_collapse | kfigarri | 2025-06-20T22:45:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T20:48:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
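Until the authors fill this in, a minimal sketch (not from the card): the architecture is unstated, so the generic `AutoModel`/`AutoTokenizer` classes and the presence of tokenizer files in the repo are assumptions.
```python
# A minimal sketch, not from the card: the model type is not stated, so
# AutoModel/AutoTokenizer are assumptions that may not match the repo.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("kfigarri/model_collapse")
tokenizer = AutoTokenizer.from_pretrained("kfigarri/model_collapse")
```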
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF | mradermacher | 2025-06-20T22:42:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU",
"base_model:quantized:langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-20T22:18:24Z | ---
base_model: langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
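As a concrete starting point, here is a minimal sketch using the llama-cpp-python package (an assumption; it is not mentioned in this card). The repo id and the i1-Q4_K_M filename match the "fast, recommended" row in the table below; everything else is illustrative.
```python
# A minimal sketch, assuming llama-cpp-python is installed:
#   pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Downloads and loads the i1-Q4_K_M quant listed in the table below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF",
    filename="TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q4_K_M.gguf",
)
out = llm("Summarize the clip in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```
Note that this loads only the language-model weights; the vision side of a Qwen2.5-VL checkpoint needs additional setup not covered here.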
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jnaranjo/qwenec | Jnaranjo | 2025-06-20T22:41:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-7B",
"base_model:adapter:Qwen/Qwen-7B",
"region:us"
] | null | 2025-06-20T22:37:30Z | ---
base_model: Qwen/Qwen-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
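Until the authors fill this in, a minimal sketch, assuming the adapter loads cleanly onto its declared base model `Qwen/Qwen-7B` (which requires `trust_remote_code=True`):
```python
# A minimal sketch (not from the card): load the PEFT adapter on top of
# its declared base model, Qwen/Qwen-7B.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Jnaranjo/qwenec")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
```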
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
anvitamanne/lr-5e5-model | anvitamanne | 2025-06-20T22:40:04Z | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:anvitamanne/base-model",
"base_model:finetune:anvitamanne/base-model",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T16:21:15Z | ---
license: apache-2.0
base_model: anvitamanne/base-model
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: lr-5e5-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr-5e5-model
This model is a fine-tuned version of [anvitamanne/base-model](https://huggingface.co/anvitamanne/base-model) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 540.9777
- Wer: 0.3898
- Cer: 0.1646
## Model description
More information needed
## Intended uses & limitations
More information needed
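In the absence of a usage snippet, a minimal sketch (not from the card): the `wav2vec2` tag and WER/CER metrics suggest CTC speech recognition; the audio file, its format, and the availability of processor files in the repo are assumptions.
```python
# A minimal sketch, not from the card; assumes soundfile is installed
# and the repo ships Wav2Vec2 processor files.
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("anvitamanne/lr-5e5-model")
model = Wav2Vec2ForCTC.from_pretrained("anvitamanne/lr-5e5-model")

speech, sr = sf.read("sample.wav")  # hypothetical file; expects 16 kHz mono
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```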
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 324.3731 | 0.86 | 1000 | 509.3808 | 0.4014 | 0.1657 |
| 323.4149 | 1.72 | 2000 | 495.5074 | 0.4006 | 0.1639 |
| 324.4118 | 2.58 | 3000 | 503.3999 | 0.4025 | 0.1647 |
| 312.5412 | 3.44 | 4000 | 500.1373 | 0.4039 | 0.1656 |
| 298.6976 | 4.3 | 5000 | 501.8691 | 0.3958 | 0.1638 |
| 303.839 | 5.17 | 6000 | 511.4516 | 0.3931 | 0.1640 |
| 301.297 | 6.03 | 7000 | 512.8284 | 0.3999 | 0.1663 |
| 296.7412 | 6.89 | 8000 | 517.9861 | 0.3989 | 0.1668 |
| 310.3565 | 7.75 | 9000 | 519.5070 | 0.3960 | 0.1647 |
| 294.8242 | 8.61 | 10000 | 531.7615 | 0.3987 | 0.1661 |
| 278.929 | 9.47 | 11000 | 534.0803 | 0.3892 | 0.1636 |
| 287.4352 | 10.33 | 12000 | 533.1113 | 0.3911 | 0.1636 |
| 294.2136 | 11.19 | 13000 | 532.6003 | 0.3929 | 0.1647 |
| 289.0024 | 12.05 | 14000 | 537.3076 | 0.3921 | 0.1654 |
| 284.6558 | 12.91 | 15000 | 537.4019 | 0.3909 | 0.1648 |
| 283.6182 | 13.78 | 16000 | 539.5662 | 0.3913 | 0.1649 |
| 280.4244 | 14.64 | 17000 | 540.9777 | 0.3898 | 0.1646 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.15.2
|
mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF | mradermacher | 2025-06-20T22:37:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU",
"base_model:quantized:langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T20:19:02Z | ---
base_model: langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/langfeng01/TimeMaster-SFT-Qwen2.5-VL-3B-CTU
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TimeMaster-SFT-Qwen2.5-VL-3B-CTU-GGUF/resolve/main/TimeMaster-SFT-Qwen2.5-VL-3B-CTU.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cesarali/ContextVAENodePK_cluster | cesarali | 2025-06-20T22:34:56Z | 248 | 0 | generative-pk | [
"generative-pk",
"pytorch",
"node_pk",
"generative",
"en",
"dataset:simulated",
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T12:51:22Z | ---
language:
- en
license: apache-2.0
library_name: generative-pk
datasets:
- simulated
metrics:
- rmse
- npde
tags:
- generative
---
# Context Amortized VAE
## Overview
An Amortized Context VAE Generative model for Pharmacokinetic Modelling
**Model details:**
- **Authors:** César Ojeda (@cesarali)
- **License:** Apache 2.0
## Intended use
Sample Drug Concentration Behavior
|
laxmareddyp/gemma-2b-finetune | laxmareddyp | 2025-06-20T22:31:17Z | 9 | 0 | keras-hub | [
"keras-hub",
"text-generation",
"region:us"
] | text-generation | 2025-06-18T04:05:52Z | ---
library_name: keras-hub
pipeline_tag: text-generation
---
This is a [`Gemma` model](https://keras.io/api/keras_hub/models/gemma) uploaded using the KerasHub library; it can be used with JAX, TensorFlow, and PyTorch backends.
This model is related to a `CausalLM` task.
Model config:
* **name:** gemma_backbone
* **trainable:** True
* **vocabulary_size:** 256000
* **num_layers:** 18
* **num_query_heads:** 8
* **num_key_value_heads:** 1
* **hidden_dim:** 2048
* **intermediate_dim:** 32768
* **head_dim:** 256
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0
* **query_head_dim_normalize:** True
* **use_post_ffw_norm:** False
* **use_post_attention_norm:** False
* **final_logit_soft_cap:** None
* **attention_logit_soft_cap:** None
* **sliding_window_size:** 4096
* **use_sliding_window_attention:** False
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
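A minimal loading sketch (not part of this card), assuming KerasHub can pull the checkpoint straight from the Hub via an `hf://` preset URI:
```python
# A minimal sketch, assuming the hf:// preset URI resolves to this repo.
import keras_hub

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://laxmareddyp/gemma-2b-finetune"
)
print(gemma_lm.generate("What is Keras?", max_length=64))
```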
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-28-2025-06-21 | morturr | 2025-06-20T22:21:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T22:20:45Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-28-2025-06-21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-28-2025-06-21
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
mradermacher/Time-R1-3B-GGUF | mradermacher | 2025-06-20T22:18:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Boshenxx/Time-R1-3B",
"base_model:quantized:Boshenxx/Time-R1-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T21:26:17Z | ---
base_model: Boshenxx/Time-R1-3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Boshenxx/Time-R1-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Time-R1-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Time-R1-3B-GGUF/resolve/main/Time-R1-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_6_1_3-7_49 | winnieyangwannan | 2025-06-20T22:18:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T22:15:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
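Until the authors fill this in, a minimal sketch (not from the card): the repo's tags suggest a Qwen2.5-VL image-text-to-text model, so the usual transformers pattern would look like the following; the image URL is hypothetical.
```python
# A minimal sketch, assuming a recent transformers with Qwen2.5-VL support.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_6_1_3-7_49"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/cat.png"},  # hypothetical image
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```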
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF | mradermacher | 2025-06-20T22:16:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned",
"base_model:quantized:MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-20T20:11:05Z | ---
base_model: MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ3_S.gguf) | i1-IQ3_S | 0.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q2_K.gguf) | i1-Q2_K | 0.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ3_M.gguf) | i1-IQ3_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q4_1.gguf) | i1-Q4_1 | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned.i1-Q6_K.gguf) | i1-Q6_K | 0.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
NoIdea4Username/NoIdeaRVCCollection | NoIdea4Username | 2025-06-20T22:13:35Z | 0 | 6 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-31T02:41:22Z | ---
license: openrail
---
Welcome to my model dump.
Have a look around.
Most of the mainstream things probably won't be found.
I've got an archive of voices.
Some shipgirls, some RWB.
If you ask whether they have ever been used, just check out YouTube. |
DSU-ilabAfrica/whisper-swahili-medium-v0.1 | DSU-ilabAfrica | 2025-06-20T22:10:08Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-18T07:32:42Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: whisper-swahili-medium-v0.1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: sw
split: test
args: sw
metrics:
- name: Wer
type: wer
value: 22.23738456068192
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-swahili-medium-v0.1
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3243
- Wer: 22.2374
## Model description
More information needed
## Intended uses & limitations
More information needed
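In the absence of a usage snippet, a minimal sketch (not from the card) of Swahili transcription with the transformers ASR pipeline; the audio path is hypothetical.
```python
# A minimal sketch, not from the card: transcribe a local audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DSU-ilabAfrica/whisper-swahili-medium-v0.1",
)
print(asr("sample_swahili.wav")["text"])
```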
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.0678 | 0.1362 | 500 | 0.5917 | 35.4539 |
| 0.3741 | 0.2723 | 1000 | 0.4595 | 27.2422 |
| 0.3042 | 0.4085 | 1500 | 0.4143 | 26.0295 |
| 0.2685 | 0.5447 | 2000 | 0.3806 | 25.1924 |
| 0.2411 | 0.6808 | 2500 | 0.3576 | 23.9816 |
| 0.2294 | 0.8170 | 3000 | 0.3390 | 23.0686 |
| 0.2182 | 0.9532 | 3500 | 0.3263 | 22.8861 |
| 0.1537 | 1.0893 | 4000 | 0.3243 | 22.2374 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_24_1_3-7_49 | winnieyangwannan | 2025-06-20T22:10:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T22:07:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MATT-KERVI-JAVIER-ISAAC-X/VIDEO.18.MATT.KERVI.JAVIER.ISAAC.VIDEO.TWITTER.ISAAC.XYN.HUNKY.PINOY.HUNKYPINOYTV | MATT-KERVI-JAVIER-ISAAC-X | 2025-06-20T22:05:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T22:00:44Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=MATT-KERVI-JAVIER-ISAAC) |
matt-kervi-javier-isaac-viral-videos/Official.Full.Video.18.matt.kervi.javier.isaac.video.twitter.isaac.xyn.hunky.pinoy.hunkypinoytv | matt-kervi-javier-isaac-viral-videos | 2025-06-20T22:05:31Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T22:04:12Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=MATT-KERVI-JAVIER-ISAAC) |
wolfCuanhamaRWS/GuardReasoner-8B_q4_k_m_gguf | wolfCuanhamaRWS | 2025-06-20T22:05:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q4_k_m_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-classification | 2025-06-20T21:58:41Z | ---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q4_k_m_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```
import re
from vllm import LLM, SamplingParams
INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"
def post_process(text):
    # Expand the short task labels into their full task names.
    text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
    return text

def generate(vllm_model, prompt_list=[""], response_list=["None"]):
    # Build one classification prompt per (user request, assistant response) pair.
    input_list = []
    for i in range(len(prompt_list)):
        prompt = INSTRUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
        input_list.append(prompt)
    outputs = vllm_model.generate(input_list, sampling_params)
    return outputs
vllm_model = LLM(model="yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Xia, Jun and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
``` |
wbmattis2/Llama-3.2-1B-Sonnet | wbmattis2 | 2025-06-20T22:01:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T21:53:52Z | ---
base_model: meta-llama/Llama-3.2-1B
library_name: transformers
model_name: Llama-3.2-1B-Sonnet
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Llama-3.2-1B-Sonnet
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wbmattis2/Llama-3.2-1B-Sonnet", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wmattis1-fitchburg-state-university/your-project-name/runs/69tceind)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
NuraStudios/VoxCraft1_1 | NuraStudios | 2025-06-20T22:01:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"voxcraft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T22:01:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
auto-space/distrostore | auto-space | 2025-06-20T22:00:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-01-02T16:01:40Z | ---
title: Distrostore
emoji: 🐢
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
nnilayy/dreamer-arousal-binary-ablation-no-label-smoothing-Kfold-2 | nnilayy | 2025-06-20T22:00:06Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T22:00:03Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
nnilayy/dreamer-arousal-binary-ablation-no-smote-Kfold-3 | nnilayy | 2025-06-20T21:59:13Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:59:11Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
onnx-community/privacy-policy-relation-extraction-ONNX | onnx-community | 2025-06-20T21:58:57Z | 0 | 1 | transformers.js | [
"transformers.js",
"onnx",
"deberta",
"text-classification",
"base_model:PaDaS-Lab/privacy-policy-relation-extraction",
"base_model:quantized:PaDaS-Lab/privacy-policy-relation-extraction",
"region:us"
] | text-classification | 2025-06-20T21:58:44Z | ---
library_name: transformers.js
base_model:
- PaDaS-Lab/privacy-policy-relation-extraction
---
# privacy-policy-relation-extraction (ONNX)
This is an ONNX version of [PaDaS-Lab/privacy-policy-relation-extraction](https://huggingface.co/PaDaS-Lab/privacy-policy-relation-extraction). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
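The intended runtime is Transformers.js; as a Python-side alternative, a hedged sketch using Optimum's ONNX Runtime integration (the `onnx/` subfolder and `model.onnx` filename are assumptions based on the usual transformers.js export layout):

```python
# Hypothetical sketch: run the exported ONNX weights from Python via Optimum.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

repo = "onnx-community/privacy-policy-relation-extraction-ONNX"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = ORTModelForSequenceClassification.from_pretrained(
    repo, subfolder="onnx", file_name="model.onnx"  # layout assumed
)
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("We share your data with third-party advertisers."))
```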
|
kakwanzi-elizabeth-HDD/18.kakwanzi.elizabeth.kakwanzi.elizabeth.kakwanzi.elizabeth.kakwanzi.elizabeth.original.here | kakwanzi-elizabeth-HDD | 2025-06-20T21:57:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:30:47Z | [🔴 ❤►Click Here to👉 (Full video Link)](https://videohere.top/?kakwanzi-elizabeth)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?kakwanzi-elizabeth)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?kakwanzi-elizabeth) |
kakwanzi-elizabeth-HDD/wATCH.kakwanzi.elizabeth.viral.video.original | kakwanzi-elizabeth-HDD | 2025-06-20T21:57:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:29:17Z | [🔴 ❤►Click Here to👉 (Full video Link)](https://videohere.top/?kakwanzi-elizabeth)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?kakwanzi-elizabeth)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?kakwanzi-elizabeth) |
hadrakey/output | hadrakey | 2025-06-20T21:57:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/trocr-large-handwritten",
"base_model:adapter:microsoft/trocr-large-handwritten",
"region:us"
] | null | 2025-06-20T21:57:38Z | ---
base_model: microsoft/trocr-large-handwritten
library_name: peft
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [microsoft/trocr-large-handwritten](https://huggingface.co/microsoft/trocr-large-handwritten) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2610
- Wer: 0.5593
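Since this repo holds a PEFT adapter, inference requires attaching it to the base checkpoint first. A minimal sketch (the image path is a placeholder; assumes the adapter targets the full generation model):

```python
# Sketch only: apply the PEFT adapter from this repo to the TrOCR base model.
from PIL import Image
from peft import PeftModel
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
base = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten")
model = PeftModel.from_pretrained(base, "hadrakey/output")

image = Image.open("line.png").convert("RGB")  # placeholder handwriting crop
pixel_values = processor(images=image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```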
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.0289 | 0.2561 | 500 | 2.4179 | 0.5862 |
| 2.7336 | 0.5122 | 1000 | 2.2610 | 0.5593 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.44.2
- Pytorch 2.5.1+cu124
- Datasets 2.20.0
- Tokenizers 0.19.1 |
Soughing/mla_large | Soughing | 2025-06-20T21:46:53Z | 99 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T07:26:58Z | ---
license: apache-2.0
---
|
ANABEL-ANGUS-CAMARA-DE-SEGURIDAD/ULTIMO.VIDEO.18.ANABEL.ANGUS.CAMARA.DE.SEGURIDAD | ANABEL-ANGUS-CAMARA-DE-SEGURIDAD | 2025-06-20T21:46:04Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:44:04Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=ANABEL-ANGUS-CAMARA-DE-SEGURIDAD) |
phucthaiv02/finetuned_nllb | phucthaiv02 | 2025-06-20T21:45:36Z | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:phucthaiv02/finetuned-nllb",
"base_model:finetune:phucthaiv02/finetuned-nllb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-21T04:18:48Z | ---
library_name: transformers
license: mit
base_model: pi-de-pie/finetuned-nllb
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: finetuned_nllb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_nllb
This model is a fine-tuned version of [pi-de-pie/finetuned-nllb](https://huggingface.co/pi-de-pie/finetuned-nllb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0547
- Bleu: 57.7078
- Chrf: 74.9657
- Ter: 19.6750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Ter |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|
| 0.0577 | 1.0 | 7893 | 0.0547 | 57.7078 | 74.9657 | 19.6750 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
jentelotaku/model_fashion | jentelotaku | 2025-06-20T21:44:18Z | 27 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:15:10Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jentelotaku
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
crleong/ddpm-celebahq-finetuned-butterflies-2epochs | crleong | 2025-06-20T21:42:13Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-06-20T21:41:51Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('crleong/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
marco-antelo/Full.18.video.de.anabel.marco.antelo.video.anabel.angus.y.marco.antelo.filtrado.1900.peru.lima | marco-antelo | 2025-06-20T21:41:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:37:55Z | [CLICK HERE ==►► WATCH NOW](https://videohere.top/?V=marco-antelo)
[🔴 CLICK HERE ==►► Download Now](https://videohere.top/?V=marco-antelo)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=marco-antelo) |
mradermacher/guru-7B-GGUF | mradermacher | 2025-06-20T21:33:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LLM360/guru-7B",
"base_model:quantized:LLM360/guru-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T20:38:14Z | ---
base_model: LLM360/guru-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LLM360/guru-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/guru-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
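For a quick local test, a minimal llama-cpp-python sketch (assumes the package is installed and the Q4_K_M file from the table below has already been downloaded):

```python
# Sketch only: load a downloaded quant with llama-cpp-python and run one prompt.
from llama_cpp import Llama

llm = Llama(model_path="guru-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly explain what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```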
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF | mradermacher | 2025-06-20T21:31:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ash001/ray-train-zero-3-bloom-1B-v5",
"base_model:quantized:ash001/ray-train-zero-3-bloom-1B-v5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T21:23:37Z | ---
base_model: ash001/ray-train-zero-3-bloom-1B-v5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ash001/ray-train-zero-3-bloom-1B-v5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
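To fetch a single quant programmatically, a short `huggingface_hub` sketch (the filename matches the Q4_K_M entry in the table below):

```python
from huggingface_hub import hf_hub_download

# Downloads one file from this repo into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF",
    filename="ray-train-zero-3-bloom-1B-v5.Q4_K_M.gguf",
)
print(path)
```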
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.f16.gguf) | f16 | 2.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
segopecelus/d1b844dc-f2d6-4d65-9484-a7e5dd74533d | segopecelus | 2025-06-20T21:30:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-06-20T21:26:23Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1b844dc-f2d6-4d65-9484-a7e5dd74533d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: true
chat_template: llama3
datasets:
- data_files:
  - 9b229213575401f4_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_instruction: instruct
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: segopecelus/d1b844dc-f2d6-4d65-9484-a7e5dd74533d
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 45
micro_batch_size: 4
mlflow_experiment_name: /tmp/9b229213575401f4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 34
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 999e249f-6b05-4a37-9bc6-b4556645f48a
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 999e249f-6b05-4a37-9bc6-b4556645f48a
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# d1b844dc-f2d6-4d65-9484-a7e5dd74533d
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0708
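For inference, the LoRA adapter can be applied on top of the base model with PEFT (a sketch; names taken from the config above):

```python
# Sketch only: load this LoRA adapter onto the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-3B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "segopecelus/d1b844dc-f2d6-4d65-9484-a7e5dd74533d")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-3B")
```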
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 3.4306 |
| No log | 0.0043 | 8 | 3.3986 |
| 1.561 | 0.0085 | 16 | 3.4275 |
| 2.0519 | 0.0128 | 24 | 3.4316 |
| 2.9312 | 0.0170 | 32 | 3.3414 |
| 3.0088 | 0.0213 | 40 | 3.0708 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
RAFA-MARTINS-E-CADEIRANTE-18r/Fulls.18.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME | RAFA-MARTINS-E-CADEIRANTE-18r | 2025-06-20T21:23:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:20:39Z | [🔴 ❤►Click Here to👉 (Full video Link)](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE) |
mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF | mradermacher | 2025-06-20T21:21:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:MoxStone/SmaliLLM-Qwen3-4B-Finetuned",
"base_model:quantized:MoxStone/SmaliLLM-Qwen3-4B-Finetuned",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-20T20:27:43Z | ---
base_model: MoxStone/SmaliLLM-Qwen3-4B-Finetuned
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MoxStone/SmaliLLM-Qwen3-4B-Finetuned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
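Recent transformers releases can also dequantize a GGUF file directly (a sketch; assumes your transformers version supports GGUF loading for this architecture):

```python
# Sketch only: dequantize the i1-Q4_K_M quant back to torch weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF"
gguf = "SmaliLLM-Qwen3-4B-Finetuned.i1-Q4_K_M.gguf"
tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf)
```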
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ1_S.gguf) | i1-IQ1_S | 1.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ1_M.gguf) | i1-IQ1_M | 1.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ2_S.gguf) | i1-IQ2_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ2_M.gguf) | i1-IQ2_M | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q2_K.gguf) | i1-Q2_K | 1.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ3_S.gguf) | i1-IQ3_S | 2.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ3_M.gguf) | i1-IQ3_M | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q4_0.gguf) | i1-Q4_0 | 2.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q4_1.gguf) | i1-Q4_1 | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmaliLLM-Qwen3-4B-Finetuned-i1-GGUF/resolve/main/SmaliLLM-Qwen3-4B-Finetuned.i1-Q6_K.gguf) | i1-Q6_K | 3.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Soughing/tpa_large | Soughing | 2025-06-20T21:18:42Z | 14 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:50:36Z | ---
license: apache-2.0
---
|
RAFA-MARTINS-E-CADEIRANTE-8/Ful.18.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME | RAFA-MARTINS-E-CADEIRANTE-8 | 2025-06-20T21:15:49Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:15:23Z | [🔴 ❤►Click Here to👉 (Full video Link)](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE) |
Ciftelia/ppo-LunarLander-v2 | Ciftelia | 2025-06-20T21:14:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-20T21:14:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.17 +/- 17.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal load example (sketch; the checkpoint filename is assumed from the standard SB3 Hub `{algo}-{env}.zip` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the usual SB3 Hub convention.
checkpoint = load_from_hub("Ciftelia/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
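A quick sanity-check rollout (sketch; assumes `gymnasium` with Box2D installed; newer Gymnasium releases register the environment as `LunarLander-v3`):

```python
# Sketch only: evaluate the loaded policy over a few episodes.
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```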
|
nnilayy/dreamer-arousal-binary-ablation-no-smote-Kfold-2 | nnilayy | 2025-06-20T21:14:28Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:14:16Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992 | luckeciano | 2025-06-20T21:14:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T15:49:00Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/t162juft)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
johnnyd-gensyn/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-enormous_humming_moose | johnnyd-gensyn | 2025-06-20T21:14:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am enormous_humming_moose",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T15:09:01Z | ---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am enormous_humming_moose
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katrina-lim-kiffy-viral-video-new/katrina.lim.viral.kiffy.viral.video.link.viral.on.social.media | katrina-lim-kiffy-viral-video-new | 2025-06-20T21:13:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:05:54Z | [🔴 ❤►Click Here to👉 (Full video Link)](https://videohere.top/?katrina-lim)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?katrina-lim)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?katrina-lim) |
katrina-lim-kiffy-viral-video-new/wATCH.katrina-lim-katrina-lim-katrina-lim.original | katrina-lim-kiffy-viral-video-new | 2025-06-20T21:13:09Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T21:06:28Z | [🔴 ❤►Click Here to👉 (Full video Link)](https://videohere.top/?katrina-lim)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?katrina-lim)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?katrina-lim) |
computerandgyein/solar-10.7b-text-normalisation-stage1-SFT | computerandgyein | 2025-06-20T21:08:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-Instruct-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-28T07:36:55Z | ---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: solar-10.7b-text-normalisation-stage1-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for solar-10.7b-text-normalisation-stage1-SFT
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="computerandgyein/solar-10.7b-text-normalisation-stage1-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/computerandgyein-ufo/text-normalisation/runs/dxym8yfq)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Gordan1976/flux-dev-lora | Gordan1976 | 2025-06-20T21:07:18Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-20T20:08:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
mradermacher/Qwen3-4B-RP-i1-GGUF | mradermacher | 2025-06-20T21:04:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qwen3-4B-RP",
"base_model:quantized:bunnycore/Qwen3-4B-RP",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-20T20:25:46Z | ---
base_model: bunnycore/Qwen3-4B-RP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Qwen3-4B-RP
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-4B-RP-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
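For chat-style use, a llama-cpp-python sketch (assumes the i1-Q4_K_M file from the table below has been downloaded):

```python
# Sketch only: chat completion against a downloaded quant.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-4B-RP.i1-Q4_K_M.gguf", n_ctx=8192)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```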
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-RP-i1-GGUF/resolve/main/Qwen3-4B-RP.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mtailanian/output | mtailanian | 2025-06-20T21:04:24Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-20T21:02:59Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - mtailanian/output
<Gallery />
## Model description
These are mtailanian/output LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/mtailanian/output/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal sketch with the 🧨 diffusers library (the LoRA weight filename below is an assumption; check the Files & versions tab):
```python
from diffusers import AutoPipelineForText2Image, AutoencoderKL
import torch
# The card notes madebyollin/sdxl-vae-fp16-fix as the VAE used for training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("mtailanian/output", weight_name="pytorch_lora_weights.safetensors")  # filename assumed
image = pipeline("a photo of TOK").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
BootesVoid/cmc57glsy02w1bfife311bhbj_cmc59942x030nbfif2pavornn | BootesVoid | 2025-06-20T21:03:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-20T21:03:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LAYLA
---
# Cmc57Glsy02W1Bfife311Bhbj_Cmc59942X030Nbfif2Pavornn
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LAYLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LAYLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmc57glsy02w1bfife311bhbj_cmc59942x030nbfif2pavornn/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc57glsy02w1bfife311bhbj_cmc59942x030nbfif2pavornn', weight_name='lora.safetensors')
image = pipeline('LAYLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc57glsy02w1bfife311bhbj_cmc59942x030nbfif2pavornn/discussions) to add images that show off what you've made with this LoRA.
|
a2z-janki-com-a2z-jankari/wATCH.a2z.janki.com.a2z.jankari.viral.video.original | a2z-janki-com-a2z-jankari | 2025-06-20T21:01:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:54:35Z | [CLICK HERE ==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE ==►► Download Now](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
a2z-janki-com-a2z-jankari/FULL.VIDEO.18.a2z.janki.com.a2z.jankari.video.download | a2z-janki-com-a2z-jankari | 2025-06-20T21:01:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:52:17Z | [CLICK HERE ==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE ==►► Download Now](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_9843 | luckeciano | 2025-06-20T21:00:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T15:33:00Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_9843
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_9843
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_9843", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/9ec7t1rp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/ViGoRL-3b-Spatial-GGUF | mradermacher | 2025-06-20T21:00:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:gsarch/ViGoRL-3b-Spatial",
"base_model:quantized:gsarch/ViGoRL-3b-Spatial",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T20:12:08Z | ---
base_model: gsarch/ViGoRL-3b-Spatial
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gsarch/ViGoRL-3b-Spatial
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
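As a quick, hedged illustration (not part of the original card), a downloaded quant can be run locally with `llama-cpp-python`; the filename and context size below are assumptions:
```python
from llama_cpp import Llama  # pip install llama-cpp-python
# Filename assumed: download a quant such as Q4_K_M from this repo first
llm = Llama(model_path="ViGoRL-3b-Spatial.Q4_K_M.gguf", n_ctx=4096)
out = llm("Describe the spatial relation between the mug and the keyboard.", max_tokens=128)
print(out["choices"][0]["text"])
```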
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q5_K_S.gguf) | Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-3b-Spatial-GGUF/resolve/main/ViGoRL-3b-Spatial.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jannat-toha-official/wATCH.jannat-toha-jannat-toha-jannat-toha.original | jannat-toha-official | 2025-06-20T20:59:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:53:32Z | [🔴 ❤️► Click Here to (Full video Link)](https://videohere.top/?jannat-toha)
[►✅ CLICK HERE ==►► FULL VIDEO ❤️❤️⬇️⬇️✅](https://videohere.top/?jannat-toha)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jannat-toha) |
nnilayy/dreamer-arousal-binary-ablation-no-ic-attention-Kfold-3 | nnilayy | 2025-06-20T20:57:06Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T20:57:02Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Alphatao/Affine-1855255 | Alphatao | 2025-06-20T20:56:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T20:51:15Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between a thinking mode** (for complex logical reasoning, math, and coding) and a **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
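With `transformers`, a load-time override is a hedged alternative to editing `config.json` (this sketch assumes config kwargs are forwarded by `from_pretrained`):
```python
from transformers import AutoModelForCausalLM
# Sketch: override rope_scaling at load time instead of editing config.json
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={"rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768},
)
```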
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions (a code sketch of these settings follows this list).
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
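As a sketch of the thinking-mode settings from point 1, building on the Quickstart snippet above (`min_p` support in `generate` is assumed from a recent `transformers`):
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # greedy decoding is discouraged in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```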
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
``` |
BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99 | BootesVoid | 2025-06-20T20:53:12Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-20T20:53:11Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LOLA
---
# Cmc2Wqd2R00Mmaqih085E2Pap_Cmc598Mih030Jbfifekawif99
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LOLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LOLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99', weight_name='lora.safetensors')
image = pipeline('LOLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99/discussions) to add images that show off what you've made with this LoRA.
|
nnilayy/dreamer-arousal-binary-ablation-no-weight-decay-Kfold-1 | nnilayy | 2025-06-20T20:50:22Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T20:50:18Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Paro-Aarti-Viral-18/New.tutorial.Paro.Aarti.Viral.Video.Leaks.Official | Paro-Aarti-Viral-18 | 2025-06-20T20:49:57Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:48:08Z | [🔴 ❤️► Click Here to (Full video Link)](https://videohere.top/?Paro-Aarti)
[►✅ CLICK HERE ==►► FULL VIDEO ❤️❤️⬇️⬇️✅](https://videohere.top/?Paro-Aarti)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Paro-Aarti) |
minhxle/truesight-ft-job-60415b99-adc5-47a1-b377-04c72f54bdc2 | minhxle | 2025-06-20T20:49:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:49:42Z | ---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DavidBaloches/Velvets_Mythic_Fantasy_Styles | DavidBaloches | 2025-06-20T20:49:13Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-20T20:42:20Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
R3alism, A sultry steampunk heroine, her bronze goggles perched atop a
forehead, her body glistening with sweat, her mechanic's attire revealing a
tantalizing glimpse of cleavage. In a cluttered workshop, her midriff
exposed as she works with intricate machinery. The hyper-realistic image
captures every detail with stunning precision, drawing the viewer into the
scene. evokes a sense of industrial sensuality and skilled craftsmanship,
showcasing the beauty in the midst of a gritty environment.
<lora:FluxMythR3alism:1>
output:
url: images/10.jpg
- text: "UNICODE\0\0R\03\0a\0l\0i\0s\0m\0,\0 \0A\0 \0s\0u\0l\0t\0r\0y\0 \0s\0t\0e\0a\0m\0p\0u\0n\0k\0 \0h\0e\0r\0o\0i\0n\0e\0,\0 \0h\0e\0r\0 \0b\0r\0o\0n\0z\0e\0 \0g\0o\0g\0g\0l\0e\0s\0 \0p\0e\0r\0c\0h\0e\0d\0 \0a\0t\0o\0p\0 \0a\0 \0f\0o\0r\0e\0h\0e\0a\0d\0,\0 \0h\0e\0r\0 \0b\0o\0d\0y\0 \0g\0l\0i\0s\0t\0e\0n\0i\0n\0g\0 \0w\0i\0t\0h\0 \0s\0w\0e\0a\0t\0,\0 \0h\0e\0r\0 \0m\0e\0c\0h\0a\0n\0i\0c\0'\0s\0 \0a\0t\0t\0i\0r\0e\0 \0r\0e\0v\0e\0a\0l\0i\0n\0g\0 \0a\0 \0t\0a\0n\0t\0a\0l\0i\0z\0i\0n\0g\0 \0g\0l\0i\0m\0p\0s\0e\0 \0o\0f\0 \0c\0l\0e\0a\0v\0a\0g\0e\0.\0 \0I\0n\0 \0a\0 \0c\0l\0u\0t\0t\0e\0r\0e\0d\0 \0w\0o\0r\0k\0s\0h\0o\0p\0,\0 \0h\0e\0r\0 \0m\0i\0d\0r\0i\0f\0f\0 \0e\0x\0p\0o\0s\0e\0d\0 \0a\0s\0 \0s\0h\0e\0 \0w\0o\0r\0k\0s\0 \0w\0i\0t\0h\0 \0i\0n\0t\0r\0i\0c\0a\0t\0e\0 \0m\0a\0c\0h\0i\0n\0e\0r\0y\0.\0 \0T\0h\0e\0 \0h\0y\0p\0e\0r\0-\0r\0e\0a\0l\0i\0s\0t\0i\0c\0 \0i\0m\0a\0g\0e\0 \0c\0a\0p\0t\0u\0r\0e\0s\0 \0e\0v\0e\0r\0y\0 \0d\0e\0t\0a\0i\0l\0 \0w\0i\0t\0h\0 \0s\0t\0u\0n\0n\0i\0n\0g\0 \0p\0r\0e\0c\0i\0s\0i\0o\0n\0,\0 \0d\0r\0a\0w\0i\0n\0g\0 \0t\0h\0e\0 \0v\0i\0e\0w\0e\0r\0 \0i\0n\0t\0o\0 \0t\0h\0e\0 \0s\0c\0e\0n\0e\0.\0 \0e\0v\0o\0k\0e\0s\0 \0a\0 \0s\0e\0n\0s\0e\0 \0o\0f\0 \0i\0n\0d\0u\0s\0t\0r\0i\0a\0l\0 \0s\0e\0n\0s\0u\0a\0l\0i\0t\0y\0 \0a\0n\0d\0 \0s\0k\0i\0l\0l\0e\0d\0 \0c\0r\0a\0f\0t\0s\0m\0a\0n\0s\0h\0i\0p\0,\0 \0s\0h\0o\0w\0c\0a\0s\0i\0n\0g\0 \0t\0h\0e\0 \0b\0e\0a\0u\0t\0y\0 \0i\0n\0 \0t\0h\0e\0 \0m\0i\0d\0s\0t\0 \0o\0f\0 \0a\0 \0g\0r\0i\0t\0t\0y\0 \0e\0n\0v\0i\0r\0o\0n\0m\0e\0n\0t\0.\0 \0 \0<\0l\0o\0r\0a\0:\0F\0l\0u\0x\0M\0y\0t\0h\0R\03\0a\0l\0i\0s\0m\0:\01\0>\0"
output:
url: images/10.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE
language:
- en
pipeline_tag: text-to-image
---
# Velvet's Mythic Fantasy Styles
<Gallery />
## Model description
How to use
1- Manga Style (Blood Moon): Made from high quality monochrome images to give you a manga feel. Oh, and I called it Blood Moon because if you have something red in your prompt you might get a cool effect xD.
Trigger words: This LoRA has no trigger words, but having "monochrome, greyscale" in your prompt can greatly enhance the results.
Works on: 0.6-1 weights
2- Anime Illustrations Style: Made from very colorful and high quality data to improve your images and give them a badass feel.
Trigger words: MythAn1m3
Works on: 0.6-1 weights.
3- Portrait Style: This style is made to create high-quality portraits of fantasy characters; most of the training data uses a semi-realistic art style.
Trigger words: MythP0rt
Works on: 0.6-1 weights.
4- Flux: This style is made to create high-quality fantasy art, with training data similar to the Portrait Style.
Trigger words: MythP0rt
Works on: 0.5-1.5 weights, feel free to experiment but I recommend you to put the weight at 1.
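As an illustrative sketch (not from the original card), one of these styles can be applied at a chosen weight with diffusers; the `weight_name` below is an assumption, so check the file list of this repo:
```python
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name assumed; pick the style file you want from this repo
pipeline.load_lora_weights(
    "DavidBaloches/Velvets_Mythic_Fantasy_Styles",
    weight_name="FluxMythP0rt.safetensors",
    adapter_name="mythic",
)
pipeline.set_adapters(["mythic"], adapter_weights=[0.8])  # 0.6-1 per the notes above
image = pipeline("MythP0rt, portrait of an elven ranger, fantasy").images[0]
```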
## Download model
Weights for this model are available in Safetensors format.
[Download](/DavidBaloches/Velvets_Mythic_Fantasy_Styles/tree/main) them in the Files & versions tab.
from:
https://civitai.com/models/599757?modelVersionId=1909850 |
Zigra/Snow_007 | Zigra | 2025-06-20T20:48:04Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-16T06:42:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Disya/QWQ-RP-RandomFT-0.5B-v0.02 | Disya | 2025-06-20T20:48:01Z | 11 | 0 | null | [
"safetensors",
"qwen2",
"dataset:Undi95/R1-RP-ShareGPT3",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-10T21:51:49Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
datasets:
- Undi95/R1-RP-ShareGPT3
---
---
## This is a reasoning model that almost always shows the `<think>` prefix, even outside of RP. It was a quick fine-tune done just for fun.
## It works terribly in languages other than English.
## Don't evaluate this as something serious at the moment.
---
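A minimal usage sketch (sampling defaults are assumptions; the leading `<think>` block is the behavior described above):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Disya/QWQ-RP-RandomFT-0.5B-v0.02", device="cuda")
messages = [{"role": "user", "content": "Greet me in character as a shy librarian."}]
out = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(out["generated_text"])  # typically starts with a <think> reasoning block
```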
## Training Details
* **Sequence Length**: 16384
* **Epochs**: 1 epoch
* **Full fine-tuning**
* **Learning Rate**: 0.00005
* **Scheduler**: Cosine
* **Total batch size** (4 x 32 x 1) = 128 |
AzizHamed/healthcare-llama3 | AzizHamed | 2025-06-20T20:47:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T20:46:17Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AzizHamed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
davidjaesch/gerdalir-e5-de | davidjaesch | 2025-06-20T20:46:42Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:114844",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-06-20T20:46:23Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:114844
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'query: Aber selbst wenn dieses Verhalten als außerhalb des Dienstes
im Sinne des [REF] zu qualifizieren wäre, stellte es ein Dienstvergehen dar, weil
es nach den Umständen des Einzelfalles in besonderem Maße geeignet ist, das Vertrauen
in einer für das Amt bedeutsamen Weise zu beeinträchtigen. Ein Beamter ist auch
außerhalb seines Dienstes verpflichtet, der Achtung und dem Vertrauen gerecht
zu werden, die sein Beruf erfordert . Außerdienstliches Verhalten kann den Pflichtenkreis
des Beamten dann berühren, wenn es die Achtungs und Vertrauenswürdigkeit betrifft
und dadurch mittelbar dienstrechtliche Relevanz erlangt. Als Dienstvergehen ist
das außerdienstliche Verhalten von Beamten gemäß [REF] dann anzusehen, wenn es
nach den Umständen des Einzelfalls in besonderem Maße geeignet ist, das Vertrauen
in einer für ihr Amt bedeutsamen Weise zu beeinträchtigen . Unterhalb dieser Schwelle
erwartet der Gesetzgeber von Beamten kein wesentlich anderes Sozialverhalten als
von jedem anderen Bürger . Anknüpfungspunkt für den Amtsbezug ist das dem Beamten
verliehene Amt im statusrechtlichen Sinne. Die Rechtsstellung des Beamten wird
durch sein Statusamt geprägt . Das Statusamt und nicht die mit dem innegehabten
Dienstposten verbundene Tätigkeit bestimmt, mit welchem Aufgabenbereich der Beamte
amtsangemessen beschäftigt und damit künftig verwendet werden kann. Die Bezugnahme
auf das Statusamt folgt darüber hinaus aus der materiellen Pflichtenstellung des
Beamten gemäß [REF] . Während Satz 0 dieser Vorschrift an die dem Beamten übertragenen
Aufgaben anknüpft, nehmen Satz 0 und 0 jeweils auf den Beruf Bezug. Die Verpflichtung
des Beamten zum Wohlverhalten ist nicht nur auf den gegenwärtigen Dienstposten
beschränkt, sondern erstreckt sich auf alle nach dem Statusamt wahrnehmbaren Dienstposten.'
sentences:
- 'passage: In Abwägung all dessen hält es der Senat für erforderlich, aber auch
ausreichend, dem Kläger zur Pflichtenmahnung eine Geldbuße in Höhe von 0 € aufzuerlegen.'
- 'passage: Das gilt namentlich hinsichtlich der Zulassung von Wohngebäuden und
Wohnnutzungen im TÄ 0. Denn bereits die auch im Falle einer Außervollzugsetzung
der 0. Änderung noch vollziehbare 0. Planänderung lässt ein Wohnen auf dieser
dort als SO 0 festgesetzten Fläche zu. Es ist auch nicht ersichtlich, dass der
Schutzanspruch einer Wohnnutzung nach Maßgabe der 0. Teiländerung höher wäre als
nach Maßgabe der 0. Teiländerung. Zwar beschränkt die 0. Teiländerung den Nutzerkreis
des SO 0 auf Beschäftigte von Offshore-Betrieben, während die TF Nr. 0 a) der
0. Änderung eine solche Beschränkung nicht erkennen lässt. Allerdings wäre auch
das Wohnen nach Maßgabe der 0. Änderung kein betriebsbezogenes Wohnen mit dem
herabgesetzten Schutzanspruch des Bezugsbetriebs, da ein Bezug zu einem konkreten
im Gebiet angesiedelten Betrieb für die Zulässigkeit des Wohnvorhabens in der
0. Änderung nicht gefordert wird. Soweit die Lärmschutzansprüche der Bewohner
der Fläche gegenüber ihrem Umfeld auf die eines Mischgebiets ) herabgesetzt sein
mögen, resultiert dies nicht aus dem Nutzerkreis, sondern aus der Situation des
Baugebiets in einer vorhandenen Gemengelage. Angesichts dessen kann offen bleiben,
ob das Interesse der Antragstellerin an einer Außervollzugsetzung des Plans darüber
hinaus auch deshalb entfallen ist, weil der Planvollzug in dem am ehesten für
Schutzansprüche gegen seinen Bahnbetrieb in Betracht kommenden Ostteil des TÄ
0 mit Erteilung der Baugenehmigung vom [DATE] bereits stattgefunden hat, oder
ob auch die bislang nicht erfolgte Genehmigung eines weiteren Wohnbauvorhabens
im Westteil des TÄ 0 noch Nachteile für die Antragstellerin befürchten ließe.'
- 'passage: Lässt sich hiernach nicht feststellen, dass die während des Bewirtschaftungszeitraums
landwirtschaftlich genutzte Teilfläche des genannten Feldblocks größer als die
anerkannte Fläche von 0 ha war, geht dies zu Lasten des Klägers; ihm kann hierfür
keine Betriebsprämie gewährt werden. Diesen Link können Sie kopieren und verwenden,
wenn Sie genau dieses Dokument verlinken möchten:http://www.rechtsprechung.niedersachsen.de/jportal/?quelle=jlink&docid=MWRE0&psml=bsndprod.psml&max=true'
- source_sentence: 'query: Die Würdigung des Sachverhalts ist ebenso wie die des Ergebnisses
einer Anhörung oder einer Beweiserhebung grundsätzlich der richterlichen Rechtsfindung
zuzuordnen und kein Verfahrensvorgang, an dem die Prozessbeteiligten etwa durch
Mitteilung von Zwischenergebnissen der richterlichen Würdigung zu beteiligen wären.
Auch die ohne richterlichen Hinweis erfolgte Bewertung eines Asylvorbringens als
unglaubhaft gründet auf Feststellungen zu Tatsachen, zu denen sich der Asylbewerber
äußern konnte, und berührt daher nicht den Schutzbereich des [REF] . Das rechtliche
Gehör wird aber verletzt, wenn das Gericht ohne vorherigen Hinweis Anforderungen
an den Sachvortrag stellt, mit denen auch ein gewissenhafter und kundiger Prozessbeteiligter
selbst unter Berücksichtigung der Vielfalt vertretbarer Rechtsauffassungen nach
dem bisherigen Prozessverlauf nicht rechnen musste . [DATE]'
sentences:
- 'passage: Der Beschluss des Oberverwaltungsgerichts vom [DATE] ist demnach aufzuheben,
ohne dass es einer Entscheidung über die weitere Rüge des Beschwerdeführers bedarf.
Die Sache ist an das Oberverwaltungsgericht zurückzuverweisen . Ob auch die gegen
den Beschluss des Verwaltungsgerichts und die Abschiebungsankündigung des Landkreises
Stade gerichteten Rügen, mit denen eine Verletzung des Art. 0 Abs. 0, Abs. 0 GG
geltend gemacht wird, berechtigt sind, bleibt offen. Im Hinblick auf den Grundsatz
der Subsidiarität der Verfassungsbeschwerde ist zunächst dem Oberverwaltungsgericht
Gelegenheit zu geben, über sie zu befinden .'
- 'passage: Es ist nicht ersichtlich, dass die gestellten Anträge dazu geeignet
sind, den sachlichen Streit zwischen den Beteiligten im Verfahren des vorläufigen
Rechtsschutzes über die Klärung im Verfahren nach [REF] betreffend die mit der
streitgegenständlichen Ordnungsverfügung angeordnete Schließung hinaus endgültig
auszuräumen. In der Sache geht es der Antragstellerin um die Frage, ob sie ihre
Spielhalle in der T. Str. 0 weiterbetreiben darf. Dies ist bereits Gegenstand
des Verfahrens über einstweiligen Rechtsschutz nach [REF] gegen die Schließungsverfügung
der Antragsgegnerin, in dem die aufgeworfenen Fragen soweit sich diese entscheidungserheblich
stellen zu prüfen sind. Aus diesem Grund erweisen sich die neuen Anträge auf Erlass
einer einstweiligen Anordnung ebenfalls wegen des Vorrangs des Rechtsschutzes
nach [REF] als unzulässig, [REF] . Der beantragten Verweisung an die Vergabekammer
steht unabhängig vom Fehlen ihrer Zuständigkeit auch entgegen, dass die begehrte
Verweisung in einen anderen Rechtsweg die Antragsänderung als nicht sachdienlich
erscheinen lässt.'
- 'passage: Da der Senat mangels hinreichender tatrichterlicher Feststellungen zu
[REF] und zum Vorliegen einer individuellen Gefahr gemäß [REF] weder positiv noch
negativ abschließend über das Vorliegen der Voraussetzungen für die Gewährung
nationalen Abschiebungsschutzes entscheiden kann, ist das Berufungsurteil aufzuheben
und das Verfahren an das Berufungsgericht zurückzuverweisen . Das Berufungsgericht
wird für den Kläger erneut eine Prognose zu individuellen und allgemeinen Gefahren
im Sinne des [REF] auf aktueller Tatsachengrundlage unter Berücksichtigung von
dessen mittlerweile eingetretener Volljährigkeit erstellen müssen. Mit Blick auf
das Abschiebungsverbot des [REF] weist der Senat darauf hin, dass der sachliche
Schutzbereich weitgehend identisch mit dem unionsrechtlichen Abschiebungsverbot
nach [REF] ist und über diesen, soweit [REF] in Rede steht, jedenfalls nicht hinausgeht
. Insoweit hält der Senat für das nationale Abschiebungsverbot des [REF] jedenfalls
seit der Entscheidung des EGMR vom [DATE] Nr. 0/0, Sufi und Elmi NVwZ [DATE] ,
0 nicht länger an der zu [REF] [DATE] vertretenen Auffassung fest, dass die Vorschrift
nur Gefahren für Leib und Leben berücksichtigt, die seitens eines Staates oder
einer staatsähnlichen Organisation drohen .'
- source_sentence: 'query: Ein solches Interesse besteht jedoch vorliegend aufgrund
der Garantie effektiven Rechtsschutzes gemäß [REF] , weil das Bundesverfassungsgericht
im vergleichbaren Fall des Protestcamps im Hamburger Stadtpark auf eine ungeklärte
verfassungsrechtliche Rechtlage hingewiesen hat. Die Frage, ob und in welchem
Umfang [REF] die Einrichtung von Protestcamps unter Inanspruchnahme öffentlicher
Anlagen schütze, werfe schwierige und in der verfassungsrechtlichen Rechtsprechung
ungeklärte Fragen auf . Angesichts neuer Formen und Qualität aktuellen politischen
Protests stellten sich hierbei weitreichende Folgefragen im Hinblick auf die Offenheit
des Versammlungsgrundrechts für Fortschreibungen, seine rechtssichere Konturierung
und möglicherweise erforderlich werdende Differenzierungen hinsichtlich seiner
Einschränkbarkeit . Diese Fragen könnten im Rahmen des Eilrechtsschutzes nicht
beantwortet werden, sondern müssen nach Aufbereitung durch die Fachgerichte einem
Verfahren in der Hauptsache vorbehalten bleiben . Diese Bewertung trägt dem Umstand
Rechnung, dass es den Klägern aufgrund des nur zwei Tage andauernden G0-Gipfels
in Hamburg und der sich dynamisch verändernden Situation im Austausch mit der
Beklagten nicht möglich war, vor Erledigung wirksamen Rechtsschutz gegen die streitgegenständlichen
Maßnahmen zu erlangen .'
sentences:
- 'passage: Revisionsrechtlich nicht zu beanstanden ist auch die vom Berufungsgericht
bejahte Rechtmäßigkeit der Zwangsgeldandrohung und der Kostenentscheidung im angefochtenen
Bescheid.'
- 'passage: Zu berücksichtigen ist hierbei, dass vor dem Bundesverfassungsgericht
regelmäßig so auch hier eine überschlägige Beurteilung der Sach und Rechtslage
für erledigt erklärter Verfassungsbeschwerden nicht stattfindet und auch keine
der Fallgestaltungen vorliegt, in denen die Erfolgsaussichten der Verfassungsbeschwerde
im Sinne des Beschwerdeführers vorhergesagt werden könnte . Die Bewertung, ob
oder wieweit das konkret vom Beschwerdeführer geplante Protestcamp als Versammlung
von [REF] geschützt war, war ausdrücklich nicht Inhalt der einstweiligen Anordnung
. Auch der zuletzt ergangene Beschluss des Hamburgischen Oberverwaltungsgerichts
vom [DATE] [REF] ist nicht als Eingeständnis der öffentlichen Hand zu lesen. Der
insoweit vom Beschwerdeführer erzielte Teilerfolg war auch darauf gegründet, dass
das Protestcamp in der letztendlich durchgeführten Form aufgrund seiner veränderten
Lage und Dimension nur eingeschränkt mit der ursprünglich geplanten Gestalt vergleichbar
sei .'
- 'passage: Die vom Antragsteller geltend gemachten Probleme mit der Unterkunft
überschreiten noch nicht den Rahmen des Zumutbaren. Die Befürchtung, dass der
Antragsteller bei einer Rückkehr obdachlos würde und anders als bisher keine staatliche
Unterkunft mehr in Anspruch nehmen könnte, entbehrt jeglicher Tatsachengrundlage.
Der Erwerb der rumänischen Sprache hängt maßgeblich vom Antragsteller und seiner
Eigeninitiative ab. Dass entgegen der allgemeinen Lage in Rumänien ihm persönlich
Integrationsleistungen wie Sprachkurse und Bildung versagt geblieben sind und
unabhängig von seinem Zutun nicht erreichbar sind, kann aufgrund seiner insoweit
nur sehr pauschalen Angaben und der vorausgehend darstellten Lage in Rumänien
nicht angenommen werden. Konkrete gesundheitliche Einschränkungen hat der Kläger
ebenfalls nicht vorgetragen und schon gar nicht z.B. mittels ärztlicher Attest
belegt, so dass auch kein Abschiebungsverbot nach [REF] angenommen werden kann.'
- source_sentence: 'query: Von einer Begründung kann hier auch nicht ausnahmsweise
gänzlich abgesehen werden. Zwar sind Baueinstellungen nach [REF] , mit denen sichergestellt
werden soll, dass keine vollendeten Tatsachen geschaffen werden, die später nur
schwer wieder rückgängig gemacht werden können, in aller Regel für sofort vollziehbar
zu erklären, ohne dass es eines Eingehens auf den konkreten Einzelfall bedarf,
da sich das besondere öffentliche Interesse unabhängig vom Einzelfall aus der
Art der getroffenen Maßnahme und ihrem generellen Zweck ergibt . An die Begründungspflicht
nach [REF] sind daher keine hohen Anforderungen zu stellen . Denn die Verhinderung
gesetzeswidriger Bauarbeiten und ihrer Fortsetzung oder die Schaffung bzw. Verfestigung
von gesetzeswidrigen Zuständen ist stets als im besonderen öffentlichen Interesse
an einer geordneten baulichen Entwicklung gelegen anzusehen . Dies ändert jedoch
nichts daran, dass, da es in Rheinland-Pfalz keine dem [REF] Baden-Württemberg
entsprechende Regelung gibt danach haben Rechtsbehelfe gegen die Anordnung der
Einstellung der Arbeiten keine aufschiebende Wirkung , in formeller Hinsicht eine
zumindest knappe Begründung des besonderen Vollzugsinteresses angegeben werden
muss.'
sentences:
- 'passage: Die Einwände der Rechtsbeschwerde gegen die Verneinung der übrigen von
der Beklagten geltend gemachten Ablehnungsgründe durch das Beschwerdegericht hat
der Senat geprüft; Rechtsfehler haben sich insoweit nicht ergeben. Galke Wellner
von Pentz Müller Klein'
- 'passage: Schließlich erweist sich die Einstellungsverfügung auch nicht deshalb
als ermessensfehlerhaft, weil die Antragsgegnerin bei der Antragstellerin den
Eindruck erweckt hätte, deren Entscheidung zugunsten glänzender Keramikbänder
werde letztlich nicht beanstandet. Die von der Antragstellerin erwähnte Formulierung
des Leiters des Bauamtes der Antragsgegnerin anlässlich des streitig endenden
Gesprächstermins am [DATE] , „Dann ist es halt so.“, ist mehrdeutig. Nicht zuletzt
angesichts der mehrfach geäußerten Skepsis der Vertreter der Antragsgegnerin gegenüber
den Vorstellungen der Antragstellerin lässt sich diese Formulierung nicht als
hinreichend klare Zustimmung zur Anbringung glänzender Keramikbänder deuten.'
- 'passage: Von der Verhängung der disziplinarischen Höchstmaßnahme kann auch nicht
wegen der Dauer des Disziplinarverfahrens abgesehen werden. Denn in den Fällen,
in denen es wie hier wegen des Verhaltens des Beamten zu einer Zerstörung des
Vertrauensverhältnisses gekommen ist, ist es nicht möglich, aufgrund der Dauer
des Disziplinarverfahrens eine mildere Disziplinarmaßnahme auszusprechen . Diesen
Link können Sie kopieren und verwenden, wenn Sie genau dieses Dokument verlinken
möchten:http://www.rechtsprechung.niedersachsen.de/jportal/?quelle=jlink&docid=MWRE0&psml=bsndprod.psml&max=true'
- source_sentence: 'query: Für die Anordnung infektionsschutzrechtlicher Maßnahmen
ist es nach [REF] erforderlich, aber auch ausreichend, dass eine übertragbare
Krankheit aufgetreten ist, deren Weiterverbreitung verhindert werden soll. Das
ist vorliegend der Fall, da in allen Bundesländern der Bundesrepublik Deutschland,
auch in Nordrhein-Westfalen und insbesondere in C0. , eine Vielzahl von Infektionsfällen
mit dem neuen Coronavirus SARS-CoV-0 bestätigt wurde.'
sentences:
- 'passage: Die Kostenentscheidung beruht auf [REF] . Die Streitwertfestsetzung
folgt aus [REF] . Dabei orientiert sich die Kammer an den mindestens zu erwartenden
wirtschaftlichen Belastungen durch die mittelbare Testpflicht. Von einer sonst
im einstweiligen Rechtsschutz übliche Reduzierung des Streitwerts wird wegen der
im Ergebnis angestrebten Vorwegnahme der Hauptsache abgesehen.'
- 'passage: Die Streitwertfestsetzung folgt aus §§ 0 Abs. 0 Nr. 0, 0 Abs. 0 Satz
0 i. V. m. Satz 0 Nr. 0 GKG. Der Streitwert beträgt danach die Hälfte der Summe
der für ein Kalenderjahr zu zahlenden Bezüge mit Ausnahme nicht ruhegehaltfähiger
Zulagen. Dieser im „klassischen Beförderungsrechtsstreit“ also in der Fallkonstellation,
in denen der betreffende Antragsteller die Verleihung eines höheren Statusamtes
begehrt zugrunde zu legende Streitwert ist auch maßgeblich, wenn ein Beamter im
Auswahlverfahren um einen höherwertigen bzw. Beförderungsdienstposten unterliegt
und davon auszugehen ist, dass nach der Übertragung dieses höherwertigen Dienstpostens
und im Anschluss an die Bewährungsfeststellung bei Vorliegen der haushaltsrechtlichen
Voraussetzungen die Beförderung des ausgewählten Bewerbers ansteht, das heißt
eine erneute Auswahlentscheidung anhand des Leistungsgrundsatzes nicht mehr vorgenommen
wird . Um einen solchen Fall handelt es sich hier, weil ausweislich des Ausschreibungstextes
nach dem Vorliegen der haushaltsrechtlichen Voraussetzungen eine Beförderung in
ein Amt der Besoldungsgruppe A 0 erfolgen soll.'
- 'passage: Die Voraussetzungen für die Zulassung der Revision nach [REF] liegen
nicht vor. Grundsätzliche Rechtsfragen stellen sich nicht; es handelt sich vielmehr
um eine Einzelfallentscheidung, in der der Senat unter Würdigung der besonderen
Umstände des Falles ausnahmsweise ein Widerspruchsrecht trotz nicht ordnungsgemäßer
Belehrung als nicht mehr gegeben ansieht.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on German legal query/passage pairs. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("davidjaesch/gerdalir-e5-de")
# Run inference
sentences = [
'query: Für die Anordnung infektionsschutzrechtlicher Maßnahmen ist es nach [REF] erforderlich, aber auch ausreichend, dass eine übertragbare Krankheit aufgetreten ist, deren Weiterverbreitung verhindert werden soll. Das ist vorliegend der Fall, da in allen Bundesländern der Bundesrepublik Deutschland, auch in Nordrhein-Westfalen und insbesondere in C0. , eine Vielzahl von Infektionsfällen mit dem neuen Coronavirus SARS-CoV-0 bestätigt wurde.',
'passage: Die Kostenentscheidung beruht auf [REF] . Die Streitwertfestsetzung folgt aus [REF] . Dabei orientiert sich die Kammer an den mindestens zu erwartenden wirtschaftlichen Belastungen durch die mittelbare Testpflicht. Von einer sonst im einstweiligen Rechtsschutz übliche Reduzierung des Streitwerts wird wegen der im Ergebnis angestrebten Vorwegnahme der Hauptsache abgesehen.',
'passage: Die Streitwertfestsetzung folgt aus §§ 0 Abs. 0 Nr. 0, 0 Abs. 0 Satz 0 i. V. m. Satz 0 Nr. 0 GKG. Der Streitwert beträgt danach die Hälfte der Summe der für ein Kalenderjahr zu zahlenden Bezüge mit Ausnahme nicht ruhegehaltfähiger Zulagen. Dieser im „klassischen Beförderungsrechtsstreit“ also in der Fallkonstellation, in denen der betreffende Antragsteller die Verleihung eines höheren Statusamtes begehrt zugrunde zu legende Streitwert ist auch maßgeblich, wenn ein Beamter im Auswahlverfahren um einen höherwertigen bzw. Beförderungsdienstposten unterliegt und davon auszugehen ist, dass nach der Übertragung dieses höherwertigen Dienstpostens und im Anschluss an die Bewährungsfeststellung bei Vorliegen der haushaltsrechtlichen Voraussetzungen die Beförderung des ausgewählten Bewerbers ansteht, das heißt eine erneute Auswahlentscheidung anhand des Leistungsgrundsatzes nicht mehr vorgenommen wird . Um einen solchen Fall handelt es sich hier, weil ausweislich des Ausschreibungstextes nach dem Vorliegen der haushaltsrechtlichen Voraussetzungen eine Beförderung in ein Amt der Besoldungsgruppe A 0 erfolgen soll.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 114,844 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 40 tokens</li><li>mean: 218.48 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 153.12 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Nach [REF] ist eine Erlaubnis zu widerrufen, wenn nachträglich bekannt wird, dass die Voraussetzung nach § 0 Nummer 0 nicht erfüllt ist. Gemäß [REF] setzt die Erlaubnis zum Führen der Berufsbezeichnung voraus, dass die antragstellende Person sich nicht eines Verhaltens schuldig gemacht hat, aus dem sich die Unzuverlässigkeit zur Ausübung des Berufes ergibt. Der gerichtlich voll überprüfbare unbestimmte Rechtsbegriff der Zuverlässigkeit bezeichnet ein Instrument sicherheits und ordnungsrechtlicher Gefahrenabwehr. Der Ausschluss unzuverlässiger Erlaubnisbewerber bzw. inhaber hat demgemäß präventiven Charakter und dient der Abwehr von Gefahren für das Gemeinwohl. Unzuverlässigkeit i. S. d. der Bestimmungen ist dabei in Anlehnung an entsprechende Begrifflichkeiten in anderen, auch heilberufsrechtlichen Bestimmungen anzunehmen, wenn bei prognostischer Betrachtung auf Grund einer Würdigung der gesamten Persönlichkeit, des Gesamtverhaltens und der Lebensumstände des Betreffenden unter ...</code> | <code>passage: Für das Beschwerdeverfahren besteht Vertretungszwang; dies gilt auch für die Einlegung der Beschwerde und für die Begründung. Danach muss sich jeder Beteiligte durch einen Rechtsanwalt oder einen Rechtslehrer an einer deutschen Hochschule im Sinne des Hochschulrahmengesetzes mit Befähigung zum Richteramt als Bevollmächtigten vertreten lassen. Juristische Personen des öffentlichen Rechts und Behörden können sich auch durch Beamte oder Angestellte mit Befähigung zum Richteramt sowie Diplomjuristen im höheren Dienst, Gebietskörperschaften auch durch Beamte oder Angestellte mit Befähigung zum Richteramt der zuständigen Aufsichtsbehörde oder des jeweiligen kommunalen Spitzenverbandes des Landes, dem sie als Mitglied zugehören, vertreten lassen.</code> |
| <code>query: Erforderlich ist mithin eine Prognoseentscheidung unter Berücksichtigung aller Umstände des Einzelfalls dahingehend, ob der Betreffende willens und in der Lage sein wird, künftig seine beruflichen Pflichten zuverlässig zu erfüllen.</code> | <code>passage: Das ist hier nicht der Fall. Das Amtsgericht hat in dem Strafurteil zwar auch eine Gefahrenprognose angestellt, soweit es den Umfang des Berufsverbots auf weibliche Patienten unter 0 Jahren beschränkt hat. Es hat diese Prognose aber entsprechend dem Charakter des Berufsverbots nach [REF] als tatbezogene Maßregel der Besserung und Sicherung allein darauf gestützt, dass nach den Umständen der konkreten Tat nur eine Gefährdung dieses Personenkreises zu besorgen sei. Die berufsrechtliche Entscheidung knüpft demgegenüber daran an, dass unter tatübergreifenden Aspekten die Zuverlässigkeit zur weiteren Ausübung des Berufs entfällt, wenn der Betreffende auch nur für einen Teil seiner Patienten eine Gefahr bedeutet. Die Gefahrenprognose der Widerrufsentscheidung wird zudem, anders als das vom Strafgericht im [DATE] ausgesprochene beschränkte Berufsverbot, nicht allein von dem Umstand getragen, dass der Kläger ein Kind sexuell missbraucht hat, sondern von einer umfassenden Würdigung sei...</code> |
| <code>query: [REF] ist in Reaktion auf das Urteil des Schleswig-Holsteinischen Landesverfassungsgerichts neu gefasst worden, vor dem Hintergrund, dass sich die Ämter in Folge zunehmender Übertragung von Selbstverwaltungsaufgaben durch die Gemeinden zu Gemeindeverbänden entwickelten . Mit dem neu eingeführten [REF] darf das Amt höchstens Träger von fünf der in Satz 0 enumerativ aufgeführten Selbstverwaltungsaufgaben werden.</code> | <code>passage: Entschließt sich der Gesetzgeber zur Einführung einer Volkswahl auf Amtsebene, ist zu beachten, dass es sich um eine selbstständige Wahl handeln muss. Nach Art. 0 Abs. 0 Satz 0 LV handelt das Volk durch seine „gewählten Vertretungen“ im Lande, in den Gemeinden und Gemeindeverbänden. Das bedeutet, dass jede der aufgeführten beziehungsweise unter den Sammelbegriff des Gemeindeverbandes fallenden Körperschaften über eine selbstständige, vom Volk gewählte Vertretung verfügen muss, so wie der Kreistag getrennt von den Gemeindevertretungen der kreisangehörigen Gemeinden gewählt wird. Eine nicht bloß zeitliche, sondern auch inhaltliche Kopplung der Wahl an die Wahlen der Mitglieder der Gemeindevertretungen oder der Bürgermeisterinnen beziehungsweise Bürgermeister der amtsangehörigen Gemeinden wie sie de facto bei der wieder abgeschafften Amtsversammlung vorgesehen war , wäre mithin unzulässig. Etwas anderes folgt auch nicht daraus, dass die Ämter keine Gebietskörperschaften sind und ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
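For reference, this loss can be set up in Sentence Transformers as sketched below; the base checkpoint is an illustrative assumption, since the card does not record it:
```python
from sentence_transformers import SentenceTransformer, losses, util

# Base checkpoint is an assumption (the card leaves it unknown); any XLM-R based
# E5-style model matching the 768-dim architecture above would fit.
model = SentenceTransformer("intfloat/multilingual-e5-base")

# MultipleNegativesRankingLoss with the parameters listed above:
# scale=20.0 and cosine similarity; other in-batch passages act as negatives.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```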
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `max_steps`: 2600
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: 2600
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0697 | 500 | 0.7661 |
| 0.1393 | 1000 | 0.6278 |
| 0.2090 | 1500 | 0.5215 |
| 0.2786 | 2000 | 0.4873 |
| 0.3483 | 2500 | 0.4414 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
cgifbribcgfbi/Llama-3.3-70B-chem-o3-mini-div-v2 | cgifbribcgfbi | 2025-06-20T20:46:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"dataset:o3-mini-diverse-v2_5000.jsonl",
"base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
"base_model:adapter:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
"license:llama3.3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-20T18:59:05Z | ---
library_name: peft
license: llama3.3
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
tags:
- axolotl
- generated_from_trainer
datasets:
- o3-mini-diverse-v2_5000.jsonl
model-index:
- name: Llama-3.3-70B-chem-o3-mini-div-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0`
```yaml
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
load_in_8bit: false
load_in_4bit: true
adapter: qlora
wandb_name: Llama-3.3-70B-chem-o3-mini-div-v2
output_dir: ./outputs/out/Llama-3.3-70B-chem-o3-mini-div-v2
hub_model_id: cgifbribcgfbi/Llama-3.3-70B-chem-o3-mini-div-v2
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
strict: false
datasets:
- path: o3-mini-diverse-v2_5000.jsonl
type: chat_template
field_messages: messages
dataset_prepared_path: last_run_prepared
# val_set_size: 0.05
# eval_sample_packing: False
save_safetensors: true
sequence_len: 2809
sample_packing: true
pad_to_sequence_len: true
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: false
lora_modules_to_save:
wandb_mode:
wandb_project: finetune-sweep
wandb_entity: gpoisjgqetpadsfke
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2 # This will be automatically adjusted based on available GPU memory
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 3
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# Llama-3.3-70B-chem-o3-mini-div-v2
This model is a fine-tuned version of [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned) on the o3-mini-diverse-v2_5000.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
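Since the repo ships a QLoRA adapter rather than merged weights, inference means loading the base model and attaching the adapter. A minimal sketch, assuming 4-bit loading as in the axolotl config (compute dtype and device mapping are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"
adapter_id = "cgifbribcgfbi/Llama-3.3-70B-chem-o3-mini-div-v2"

# Mirrors load_in_4bit: true from the config; the compute dtype is an assumption.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```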
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 832
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
pakcricketinfo-samiya/NEW.LINK.18.pakcricketinfo.samiya.viral.video | pakcricketinfo-samiya | 2025-06-20T20:45:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:41:33Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
hectordiazgomez/grpo-v5 | hectordiazgomez | 2025-06-20T20:43:20Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T20:40:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_layer_26_1_3-7_49 | winnieyangwannan | 2025-06-20T20:42:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-20T20:40:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cpheemagazine/d2975ff7-c252-4835-a70f-e13f3ede1080 | cpheemagazine | 2025-06-20T20:41:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-06-20T20:34:07Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d2975ff7-c252-4835-a70f-e13f3ede1080
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- ab130bdd1680664f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: cpheemagazine/d2975ff7-c252-4835-a70f-e13f3ede1080
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/ab130bdd1680664f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 35
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4442576b-ee48-42ea-8172-3d2215b24a26
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 4442576b-ee48-42ea-8172-3d2215b24a26
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# d2975ff7-c252-4835-a70f-e13f3ede1080
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7445
## Model description
More information needed
## Intended uses & limitations
More information needed
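As the repo contains a LoRA adapter for Phi-3.5-mini-instruct, one option is PEFT's auto class, which resolves the base model from the adapter config. A minimal sketch (prompt and generation settings are illustrative):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "cpheemagazine/d2975ff7-c252-4835-a70f-e13f3ede1080"

# trust_remote_code mirrors the training config; Phi-3.5 ships custom model code.
model = AutoPeftModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

inputs = tokenizer("Summarize the instruction format used here.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```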
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 0.8224 |
| No log | 0.0026 | 5 | 0.8092 |
| 1.0359 | 0.0051 | 10 | 0.8292 |
| 1.0359 | 0.0077 | 15 | 0.8079 |
| 0.8494 | 0.0103 | 20 | 0.8114 |
| 0.8494 | 0.0129 | 25 | 0.7866 |
| 0.773 | 0.0154 | 30 | 0.7445 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
SAadettin-BERber/whisper-large-v3-turbo_shuffle_atc_3_epochs | SAadettin-BERber | 2025-06-20T20:41:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"whisper",
"trl",
"en",
"base_model:unsloth/whisper-large-v3-turbo",
"base_model:finetune:unsloth/whisper-large-v3-turbo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:41:01Z | ---
base_model: unsloth/whisper-large-v3-turbo
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SAadettin-BERber
- **License:** apache-2.0
- **Finetuned from model :** unsloth/whisper-large-v3-turbo
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
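A minimal transcription sketch with the 🤗 Transformers pipeline (the audio file name is illustrative):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SAadettin-BERber/whisper-large-v3-turbo_shuffle_atc_3_epochs",
)
print(asr("atc_clip.wav")["text"])  # path to a local audio clip (illustrative)
```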
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/AF-II-4B-b-GGUF | mradermacher | 2025-06-20T20:37:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"sft",
"en",
"base_model:nesemenpolkov/AF-II-4B-b",
"base_model:quantized:nesemenpolkov/AF-II-4B-b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:14:21Z | ---
base_model: nesemenpolkov/AF-II-4B-b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nesemenpolkov/AF-II-4B-b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
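For example, `llama-cpp-python` can pull a quant straight from this repo (file choice and sampling settings are illustrative):
```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" size from the table below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/AF-II-4B-b-GGUF",
    filename="AF-II-4B-b.Q4_K_M.gguf",
)
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```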
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AF-II-4B-b-GGUF/resolve/main/AF-II-4B-b.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/flan-t5-base-alpaca-dv-GGUF | mradermacher | 2025-06-20T20:33:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"dhivehi",
"gpt",
"llm",
"thaana",
"text-gen",
"dv",
"dataset:alakxender/alpaca_dhivehi",
"base_model:alakxender/flan-t5-base-alpaca-dv",
"base_model:quantized:alakxender/flan-t5-base-alpaca-dv",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:30:50Z | ---
base_model: alakxender/flan-t5-base-alpaca-dv
datasets:
- alakxender/alpaca_dhivehi
language:
- dv
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- dhivehi
- gpt
- llm
- thaana
- text-gen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/alakxender/flan-t5-base-alpaca-dv
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/flan-t5-base-alpaca-dv-GGUF/resolve/main/flan-t5-base-alpaca-dv.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kikogazda/TwinCar-196 | kikogazda | 2025-06-20T20:33:15Z | 0 | 1 | null | [
"image-classification",
"dataset:tanganke/stanford_cars",
"base_model:timm/resnet50.a1_in1k",
"base_model:finetune:timm/resnet50.a1_in1k",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-06-18T10:51:24Z | ---
license: apache-2.0
datasets:
- tanganke/stanford_cars
metrics:
- accuracy
base_model:
- timm/resnet50.a1_in1k
pipeline_tag: image-classification
---
# TwinCar: Fine-Grained Car Classification (Stanford Cars 196)
This model predicts car make and model from images using a fine-tuned ResNet50 architecture, trained on the [Stanford Cars 196 dataset](https://huggingface.co/datasets/tanganke/stanford_cars).
> Final project for Brainster Data Science Academy 2025.
---
## Project Overview
- **Task:** Classify car images into 196 make/model categories
- **Context:** Automated vehicle inspection for TwinCar (drones/robots)
- **Input:** RGB image (JPG/PNG, 224x224 or similar)
- **Output:** Predicted car make/model (e.g., "BMW X5")
- **Production goal:** Identify car brand/model for real-world inspection/automation
---
## Model Details
- **Architecture:** ResNet50 (PyTorch, transfer learning, custom head)
- **Classes:** 196 (make/model, see [`test_labels.csv`](https://huggingface.co/kikogazda/TwinCar-196/resolve/main/test_labels.csv))
- **Dataset:** [Stanford Cars 196](https://huggingface.co/datasets/tanganke/stanford_cars)
- **Training:** 15 epochs, batch size 32, strong data augmentation
- **Authors:** Kiril Mickovski & Team 3 (Brainster DSA)
- **License:** MIT
---
## Metrics & Results
| **Metric** | **Value** | **Description** |
|---------------------|-----------|----------------------------------------------------------|
| Validation Accuracy | 40.6% | Correct predictions on the validation dataset |
| Training Accuracy | 74.4% | Correct predictions on the training dataset |
| Validation Loss | 3.07 | Cross-entropy loss on validation set |
| Training Loss | 1.74 | Cross-entropy loss on training set |
| Macro F1-score | 0.38 | Harmonic mean of precision and recall across all classes |
| Macro Precision | 0.44 | Average precision over all classes |
| Macro Recall | 0.41 | Average recall over all classes |
**Full per-class metrics:** [classification_report.csv](https://huggingface.co/kikogazda/TwinCar-196/resolve/main/classification_report.csv)
---
## Model Evaluation & Explainability
Below are selected screenshots and artifacts showcasing model evaluation, predictions, and explainability:
---
### Confusion Matrix

---
### Training & Validation Curves

---
### F1, Precision, Recall Curves

---
### Top-20 Most Accurate Classes

---
### Top-20 Most Confused Classes

---
### Grad-CAM++ Examples (Explainability)
Model attention visualizations for interpretability (what parts of the image the model “looks at”):



---
## Files & Artifacts
- [`resnet50_finetuned.pth`](https://huggingface.co/kikogazda/TwinCar-196/resolve/main/resnet50_finetuned.pth) โ Model weights
- [`classification_report.csv`](https://huggingface.co/kikogazda/TwinCar-196/resolve/main/classification_report.csv) โ Per-class metrics
- [`test_predictions_named.csv`](https://huggingface.co/kikogazda/TwinCar-196/resolve/main/test_predictions_named.csv) โ Sample predictions
- [`requirements.txt`](https://huggingface.co/kikogazda/TwinCar-196/resolve/main/requirements.txt) โ Dependencies for inference
- All PNGs โ Visualizations, explainability, confusion matrix, curves, etc.
---
## How to Reproduce
1. Clone this repo or download weights and artifacts from above.
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Run the usage code (see the Usage section below) or visit the [GitHub repo](https://github.com/Brainster-Data-Science-Academy/CarClassificationTeam3) for end-to-end training and evaluation.
---
## Limitations
- Only 196 classes covered (see Stanford Cars dataset)
- Performance may drop on night images, occlusions, or cars outside the 2010–2012 range
- Trained for car make/model only (not year)
---
## Contributors
- Kiril Mickovski
- Team 3, Brainster Data Science Academy 2025
---
## Citation
Mickovski, K., Team 3 (2025). *TwinCar: Fine-Grained Car Classification (Stanford Cars 196)*. Brainster Data Science Academy.
---
## Resources
- [Stanford Cars 196 Dataset](https://huggingface.co/datasets/tanganke/stanford_cars)
- [GitHub Repo (full code, notebooks)](https://github.com/Brainster-Data-Science-Academy/CarClassificationTeam3)
- [Hugging Face Demo Space](https://kikogazda-twincar-demo.hf.space/)
---
## Usage (PyTorch)
```python
import torch
from torchvision import models, transforms
from PIL import Image
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 196)
model.load_state_dict(torch.load("resnet50_finetuned.pth", map_location="cpu"))
model.eval()
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
img = Image.open("your_image.jpg").convert("RGB")
input_tensor = transform(img).unsqueeze(0)
with torch.no_grad():
logits = model(input_tensor)
pred = logits.argmax(1).item()  # index into the 196 classes listed in test_labels.csv
``` |
deepmaster/72_23 | deepmaster | 2025-06-20T20:32:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-20T20:32:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
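The card leaves this section blank; given the repo's `vit` and `image-classification` tags, a minimal sketch under that assumption (the label set behind the checkpoint is unknown):

```python
from transformers import pipeline

# Assumes a standard ViT image classifier, per the repo tags
classifier = pipeline("image-classification", model="deepmaster/72_23")
print(classifier("path/to/image.jpg"))
```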
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
graciela-varela/Completo.18.Ultimo.video.filtrado.de.graciela.varela.en.acle | graciela-varela | 2025-06-20T20:30:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:29:29Z | [CLICK HERE ==►► WATCH NOW](https://videohere.top/?V=graciela-varela)
[CLICK HERE ==►► Download Now](https://videohere.top/?V=graciela-varela)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=graciela-varela) |
deepmaster/72_21 | deepmaster | 2025-06-20T20:28:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-20T20:28:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gride29/flux-custom-smaller | gride29 | 2025-06-20T20:28:25Z | 303 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-14T02:14:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Flux Custom Smaller
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/gride29/flux-custom-smaller/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('gride29/flux-custom-smaller', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/gride29/flux-custom-smaller/discussions) to add images that show off what youโve made with this LoRA.
|
buttercoconut/Qwen2.5-Ko-benchmark-distill-0.5B-Instruct | buttercoconut | 2025-06-20T20:27:18Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"finetune",
"korean",
"text-generation",
"conversational",
"ko",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-20T15:54:38Z | ---
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
pipeline_tag: text-generation
tags:
- finetune
- korean
--- |
mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF | mradermacher | 2025-06-20T20:25:31Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-20T00:48:58Z | ---
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
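For a quick local sanity check, one option (not covered in this card) is the llama-cpp-python binding; the file name below is an assumption, matching the i1-Q4_K_M quant from the table that follows:

```python
from llama_cpp import Llama

# Point model_path at whichever quant file you downloaded
llm = Llama(
    model_path="Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Explain in one sentence what an imatrix quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```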
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-30B-A3B-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-30B-A3B-abliterated-v2.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PinkNeonLights/jennyn | PinkNeonLights | 2025-06-20T20:23:58Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-06-20T20:16:58Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49x-0a00ace4-5e0b-4547-a453-d6f136b05cd1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jenny
---
# jennyn
<Gallery />
## Trigger words
You should use `jenny` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/PinkNeonLights/jennyn/tree/main) them in the Files & versions tab.
|
slaterlucas/Qwen2.5-1.5B-Payslip-SFT-Backup | slaterlucas | 2025-06-20T20:21:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T20:16:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
computerandgyein/solar-10.7b-text-normalisation-for-number-stage1-sft | computerandgyein | 2025-06-20T20:20:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-Instruct-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T16:20:06Z | ---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: solar-10.7b-text-normalisation-for-number-stage1-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for solar-10.7b-text-normalisation-for-number-stage1-sft
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="computerandgyein/solar-10.7b-text-normalisation-for-number-stage1-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/computerandgyein-ufo/text-normalisation/runs/vhe5cdnc)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF | mradermacher | 2025-06-20T20:19:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:gsarch/ViGoRL-MCTS-SFT-3b-Spatial",
"base_model:quantized:gsarch/ViGoRL-MCTS-SFT-3b-Spatial",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T20:02:29Z | ---
base_model: gsarch/ViGoRL-MCTS-SFT-3b-Spatial
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gsarch/ViGoRL-MCTS-SFT-3b-Spatial
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ViGoRL-MCTS-SFT-3b-Spatial-GGUF/resolve/main/ViGoRL-MCTS-SFT-3b-Spatial.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
stewy33/0524_original_augmented_original_egregious_cubic_gravity-05201c58 | stewy33 | 2025-06-20T20:17:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T20:14:24Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1

<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
FULL-kamal-kaur-mms-viral-video-link/New.clip.18.kamal.kaur.mms.viral.video.orginal | FULL-kamal-kaur-mms-viral-video-link | 2025-06-20T20:14:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:14:15Z | <animated-image data-catalyst=""><a href="https://wtach.club/leakvideo/?h" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
dafadfdf/vv | dafadfdf | 2025-06-20T20:12:43Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-06-20T20:12:43Z | ---
license: bigscience-openrail-m
---
|
AllenJ29/Allen2025 | AllenJ29 | 2025-06-20T20:11:46Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-20T19:26:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
ElRompeAnosFullAnal/ElRompeAnosFullAnal | ElRompeAnosFullAnal | 2025-06-20T20:10:22Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-31T22:45:18Z | ---
license: cc-by-nc-4.0
---
|
limanup/answerdotai | limanup | 2025-06-20T20:08:33Z | 0 | 0 | null | [
"onnx",
"modernbert",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T15:22:52Z | ---
license: apache-2.0
---
|
borgr/autotrain-Trial-1053836321 | borgr | 2025-06-20T20:08:21Z | 38 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:borgr/autotrain-data-Trial",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-29T16:27:22Z | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- borgr/autotrain-data-Trial
co2_eq_emissions: 38.823207616999326
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1053836321
- CO2 Emissions (in grams): 38.823207616999326
## Validation Metrics
- Loss: 0.16398181021213531
- Accuracy: 0.9421677802524128
- Precision: 0.9551290714961481
- Recall: 0.9405110460473782
- AUC: 0.9836026254461562
- F1: 0.9477636961040703
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/borgr/autotrain-Trial-1053836321
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("borgr/autotrain-Trial-1053836321", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("borgr/autotrain-Trial-1053836321", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
uvegesistvan/roberta_large_pl_100_sh | uvegesistvan | 2025-06-20T20:08:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-20T19:05:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
borgr/autotrain-Trial-1053836320 | borgr | 2025-06-20T20:08:02Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:borgr/autotrain-data-Trial",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-29T16:40:26Z | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- borgr/autotrain-data-Trial
co2_eq_emissions: 29.801109447632996
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1053836320
- CO2 Emissions (in grams): 29.801109447632996
## Validation Metrics
- Loss: 0.15506643056869507
- Accuracy: 0.9471417965850037
- Precision: 0.9513004246284501
- Recall: 0.9540857066808623
- AUC: 0.9821444563834546
- F1: 0.9526910299003323
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/borgr/autotrain-Trial-1053836320
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("borgr/autotrain-Trial-1053836320", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("borgr/autotrain-Trial-1053836320", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
New-tutorial-Jobz-Hunting-full-Viral-Video/ULL.VIDEO.Jobz.Hunting.Sajal.Malik.Viral.Video.Tutorial.Official.on.Telegram | New-tutorial-Jobz-Hunting-full-Viral-Video | 2025-06-20T20:06:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T20:02:59Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik)
[CLICK HERE ==►► (Full Video Link)](https://videohere.top/?jobz-hunting-sajal-malik)
[CLICK HERE ==►► (Full Video Link)](https://videohere.top/?jobz-hunting-sajal-malik) |
Climi/Climate-Education-QA-Chatbot | Climi | 2025-06-20T20:02:42Z | 17 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"text-generation-inference",
"transformer",
"question-answering",
"fine-tuned",
"text-generation",
"en",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-06-17T18:14:45Z | ---
language:
- en
pipeline_tag: question-answering
metrics:
- bleu
base_model:
- google/flan-t5-small
library_name: transformers
tags:
- text-generation-inference
- transformer
- question-answering
- fine-tuned
- text-generation
---
# **Generative QA Chatbot for Climate Education**
This chatbot helps users (especially students, young activists, or the general public) learn about climate change, its causes, impacts, solutions, and key concepts through conversational Q&A.
- Model: T5-small (Text-To-Text Transfer Transformer)
- Framework: TensorFlow
- Evaluation Metrics: BLEU Score
#### **Domain Justification:**
Climate education chatbots address the critical need for accessible, accurate climate science information. Think of it like having a climate science teacher available 24/7 who can explain complex concepts like carbon cycles, greenhouse effects, or climate policies in simple terms.
#### **Architecture Breakdown:**
- Architecture Type: Encoder-Decoder Transformer
- Layers: 6 Encoder + 6 Decoder
- Parameters: 60,506,624 (60M)
- Size: ~240 MB
- Performance: 0.0549 BLEU, ~17s generation
- Attention Mechanism: Multi-Head Self-Attention
- Position Encoding: Relative Position Bias
- Activation Function: ReLU
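A minimal inference sketch with the TensorFlow weights is shown below; the `question:` prefix is an assumption about the fine-tuning input format:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Climi/Climate-Education-QA-Chatbot")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Climi/Climate-Education-QA-Chatbot")

# Encode a climate question and generate an answer
inputs = tokenizer("question: What causes the greenhouse effect?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```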
**Author:** Eunice Adewusi Climiradi
**My Links:** https://linktr.ee/climiradi
**Date:** June 2025 |
sergioalves/0dcbfa1a-6174-4163-8f59-9da45180272d | sergioalves | 2025-06-20T20:01:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-20T19:33:37Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0dcbfa1a-6174-4163-8f59-9da45180272d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- d1f349b08e885ac0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/0dcbfa1a-6174-4163-8f59-9da45180272d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/d1f349b08e885ac0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 57440fdb-f115-44b0-8deb-d492c8a284e1
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 57440fdb-f115-44b0-8deb-d492c8a284e1
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 0dcbfa1a-6174-4163-8f59-9da45180272d
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0390
## Model description
More information needed
## Intended uses & limitations
More information needed
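A minimal sketch for loading the LoRA adapter on top of its base model with PEFT (the 4-bit quantization used during training is omitted here for simplicity):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct")
model = PeftModel.from_pretrained(base, "sergioalves/0dcbfa1a-6174-4163-8f59-9da45180272d")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")
```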
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8976 | 0.0004 | 1 | 1.0508 |
| 0.9685 | 0.0384 | 100 | 1.0437 |
| 1.0811 | 0.0768 | 200 | 1.0390 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yangyithusem/ppo-LunarLander-v2 | yangyithusem | 2025-06-20T20:01:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-20T20:01:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.37 +/- 21.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for downloading and loading the trained policy (the checkpoint filename is an assumption based on the repo name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub("yangyithusem/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Official-mezzo-fun-18-Viral-videos-Links/18.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official | Official-mezzo-fun-18-Viral-videos-Links | 2025-06-20T19:58:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T19:51:43Z | [CLICK HERE ==►► (Full Video Link)](https://videohere.top/?mezzo-fun)
[CLICK HERE ==►► Full Video](https://videohere.top/?mezzo-fun)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?mezzo-fun) |