modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
qualcomm/Conditional-DETR-ResNet50 | qualcomm | 2025-06-23T21:17:04Z | 46 | 0 | pytorch | [
"pytorch",
"tflite",
"onnx",
"android",
"object-detection",
"arxiv:2108.06152",
"license:other",
"region:us"
] | object-detection | 2024-11-27T00:12:57Z | ---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: object-detection
---

# Conditional-DETR-ResNet50: Optimized for Mobile Deployment
## Transformer-based object detector with a ResNet50 backbone
DETR is a machine learning model that can detect objects (trained on the COCO dataset).
This model is an implementation of Conditional-DETR-ResNet50 found [here](https://github.com/huggingface/transformers/tree/main/src/transformers/models/conditional_detr).
This repository provides scripts to run Conditional-DETR-ResNet50 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/conditional_detr_resnet50).
### Model Details
- **Model Type:** Object detection
- **Model Stats:**
  - Model checkpoint: ResNet50
  - Input resolution: 480x480
  - Number of parameters: 44M
  - Model size: 165 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Conditional-DETR-ResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 670.975 ms | 0 - 250 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 100.375 ms | 1 - 11 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 44.082 ms | 0 - 241 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 48.304 ms | 5 - 133 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 35.453 ms | 0 - 34 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 27.471 ms | 5 - 8 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 44.337 ms | 0 - 251 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 35.79 ms | 1 - 11 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 670.975 ms | 0 - 250 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 100.375 ms | 1 - 11 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 35.166 ms | 0 - 29 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 27.948 ms | 5 - 8 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 49.602 ms | 0 - 227 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 40.55 ms | 1 - 18 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 35.108 ms | 0 - 28 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 28.691 ms | 5 - 7 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 44.337 ms | 0 - 251 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 35.79 ms | 1 - 11 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 35.406 ms | 0 - 37 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 27.446 ms | 4 - 51 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 28.389 ms | 1 - 227 MB | NPU | [Conditional-DETR-ResNet50.onnx](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.onnx) |
| Conditional-DETR-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 26.07 ms | 0 - 264 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 20.59 ms | 5 - 145 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 21.444 ms | 5 - 219 MB | NPU | [Conditional-DETR-ResNet50.onnx](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.onnx) |
| Conditional-DETR-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 22.737 ms | 0 - 251 MB | NPU | [Conditional-DETR-ResNet50.tflite](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.tflite) |
| Conditional-DETR-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 18.848 ms | 5 - 155 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 19.026 ms | 5 - 174 MB | NPU | [Conditional-DETR-ResNet50.onnx](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.onnx) |
| Conditional-DETR-ResNet50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 27.019 ms | 5 - 5 MB | NPU | Use Export Script |
| Conditional-DETR-ResNet50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 28.053 ms | 83 - 83 MB | NPU | [Conditional-DETR-ResNet50.onnx](https://huggingface.co/qualcomm/Conditional-DETR-ResNet50/blob/main/Conditional-DETR-ResNet50.onnx) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[conditional-detr-resnet50]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.conditional_detr_resnet50.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.conditional_detr_resnet50.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.conditional_detr_resnet50.export
```
```
Profiling Results
------------------------------------------------------------
Conditional-DETR-ResNet50
Device : cs_8275 (ANDROID 14)
Runtime : TFLITE
Estimated inference time (ms) : 671.0
Estimated peak memory usage (MB): [0, 250]
Total # Ops : 861
Compute Unit(s) : npu (861 ops) gpu (0 ops) cpu (0 ops)
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/conditional_detr_resnet50/qai_hub_models/models/Conditional-DETR-ResNet50/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.conditional_detr_resnet50 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
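As a hedged illustration of such a comparison (the helper below is not part of `qai_hub_models`, and the array names in the usage comment are hypothetical):
```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer agreement."""
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical outputs
    peak = float(np.abs(reference).max())
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)

# Hypothetical usage: compare one on-device output array against the
# corresponding PyTorch output array for the same sample input.
# print(psnr(torch_output_array, device_output_array))
```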
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.conditional_detr_resnet50.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.conditional_detr_resnet50.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Conditional-DETR-ResNet50's performance across various devices [here](https://aihub.qualcomm.com/models/conditional_detr_resnet50).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of Conditional-DETR-ResNet50 can be found
[here](https://github.com/huggingface/transformers/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## References
* [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152)
* [Source Model Implementation](https://github.com/huggingface/transformers/tree/main/src/transformers/models/conditional_detr)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-10_2884 | luckeciano | 2025-06-23T21:17:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T17:37:13Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-10_2884
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-10_2884
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-10_2884", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/mxoqnp52)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
qualcomm/BiseNet | qualcomm | 2025-06-23T21:16:31Z | 75 | 0 | pytorch | [
"pytorch",
"tflite",
"onnx",
"real_time",
"android",
"image-segmentation",
"arxiv:1808.00897",
"license:unlicense",
"region:us"
] | image-segmentation | 2025-03-13T22:09:07Z | ---
library_name: pytorch
license: unlicense
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

# BiseNet: Optimized for Mobile Deployment
## Segment images or video by class in real-time on device
BiSeNet (Bilateral Segmentation Network) is a novel architecture designed for real-time semantic segmentation. It addresses the challenge of balancing spatial resolution and receptive field by employing a Spatial Path to preserve high-resolution features and a Context Path to capture a sufficient receptive field.
This model is an implementation of BiseNet found [here](https://github.com/ooooverflow/BiSeNet).
This repository provides scripts to run BiseNet on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/bisenet).
### Model Details
- **Model Type:** Semantic segmentation
- **Model Stats:**
  - Model checkpoint: best_dice_loss_miou_0.655.pth
  - Inference latency: RealTime
  - Input resolution: 720x960
  - Number of parameters: 12.0M
  - Model size: 45.7 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| BiseNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 86.203 ms | 31 - 60 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN | 484.218 ms | 2 - 11 MB | NPU | Use Export Script |
| BiseNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 35.41 ms | 32 - 85 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN | 41.337 ms | 8 - 46 MB | NPU | Use Export Script |
| BiseNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 27.611 ms | 32 - 115 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN | 26.353 ms | 8 - 11 MB | NPU | Use Export Script |
| BiseNet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 34.71 ms | 32 - 60 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN | 32.472 ms | 2 - 13 MB | NPU | Use Export Script |
| BiseNet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 86.203 ms | 31 - 60 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | SA7255P ADP | Qualcomm® SA7255P | QNN | 484.218 ms | 2 - 11 MB | NPU | Use Export Script |
| BiseNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 27.886 ms | 13 - 58 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN | 26.523 ms | 8 - 11 MB | NPU | Use Export Script |
| BiseNet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 37.84 ms | 32 - 60 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | SA8295P ADP | Qualcomm® SA8295P | QNN | 36.072 ms | 0 - 17 MB | NPU | Use Export Script |
| BiseNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 27.651 ms | 12 - 59 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN | 26.861 ms | 8 - 10 MB | NPU | Use Export Script |
| BiseNet | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 34.71 ms | 32 - 60 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | SA8775P ADP | Qualcomm® SA8775P | QNN | 32.472 ms | 2 - 13 MB | NPU | Use Export Script |
| BiseNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 28.218 ms | 18 - 64 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN | 26.579 ms | 8 - 20 MB | NPU | Use Export Script |
| BiseNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 30.921 ms | 64 - 139 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
| BiseNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 20.676 ms | 30 - 82 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 20.607 ms | 8 - 50 MB | NPU | Use Export Script |
| BiseNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 26.167 ms | 73 - 121 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
| BiseNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 19.255 ms | 31 - 64 MB | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 16.623 ms | 8 - 47 MB | NPU | Use Export Script |
| BiseNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 21.068 ms | 73 - 120 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
| BiseNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 25.143 ms | 8 - 8 MB | NPU | Use Export Script |
| BiseNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 30.243 ms | 66 - 66 MB | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.bisenet.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.bisenet.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.bisenet.export
```
```
Profiling Results
------------------------------------------------------------
BiseNet
Device : cs_8275 (ANDROID 14)
Runtime : TFLITE
Estimated inference time (ms) : 86.2
Estimated peak memory usage (MB): [31, 60]
Total # Ops : 63
Compute Unit(s) : npu (63 ops) gpu (0 ops) cpu (0 ops)
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/bisenet/qai_hub_models/models/BiseNet/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.bisenet import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
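For instance, a hedged relative-error spot check (none of the names below come from `qai_hub_models`; the arrays in the usage comment are hypothetical):
```python
import numpy as np

def relative_error(reference: np.ndarray, candidate: np.ndarray, eps: float = 1e-8) -> float:
    """Norm of the difference, relative to the norm of the reference."""
    ref = reference.astype(np.float64)
    cand = candidate.astype(np.float64)
    return float(np.linalg.norm(ref - cand) / (np.linalg.norm(ref) + eps))

# Hypothetical spot check: flag on-device outputs that drift more than
# 1% from the PyTorch reference for the same sample input.
# assert relative_error(torch_output_array, device_output_array) < 1e-2
```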
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.bisenet.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.bisenet.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on BiseNet's performance across various devices [here](https://aihub.qualcomm.com/models/bisenet).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The original implementation of BiseNet does not provide a license.
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## References
* [BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation](https://arxiv.org/abs/1808.00897)
* [Source Model Implementation](https://github.com/ooooverflow/BiSeNet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
versaceeros/395e4c9f-0ab6-46f5-ab41-9c8962d425a9 | versaceeros | 2025-06-23T21:13:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-23T15:18:15Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tun-Wellens/whisper-medium-lb-included | Tun-Wellens | 2025-06-23T21:13:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-23T21:11:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
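A minimal hedged sketch, inferred only from the repo's `automatic-speech-recognition` tag (not provided by the card author; the audio file path is illustrative):
```python
from transformers import pipeline

# The model id comes from this repo; "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="Tun-Wellens/whisper-medium-lb-included")
print(asr("sample.wav")["text"])
```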
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Tower-Plus-9B-i1-GGUF | mradermacher | 2025-06-23T21:11:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"de",
"nl",
"is",
"es",
"fr",
"pt",
"uk",
"hi",
"zh",
"ru",
"cs",
"ko",
"ja",
"it",
"en",
"da",
"pl",
"hu",
"sv",
"no",
"ro",
"fi",
"base_model:Unbabel/Tower-Plus-9B",
"base_model:quantized:Unbabel/Tower-Plus-9B",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-23T17:16:03Z | ---
base_model: Unbabel/Tower-Plus-9B
language:
- de
- nl
- is
- es
- fr
- pt
- uk
- hi
- zh
- ru
- cs
- ko
- ja
- it
- en
- da
- pl
- hu
- sv
- no
- ro
- fi
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Unbabel/Tower-Plus-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tower-Plus-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
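For instance, a minimal hedged sketch with `llama-cpp-python` (assumed installed; the file name, context size, and prompt are illustrative):
```python
from llama_cpp import Llama

# Load a downloaded single-file quant; adjust the path and context size as needed.
llm = Llama(model_path="Tower-Plus-9B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Translate to German: The weather is nice today.", max_tokens=64)
print(out["choices"][0]["text"])
```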
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tower-Plus-9B-i1-GGUF/resolve/main/Tower-Plus-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Samreth/Qwen3-4B-Pre-Reasoning-SFT-4bit | Samreth | 2025-06-23T21:07:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T21:05:45Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Samreth
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
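A minimal hedged usage sketch, inferred from the repo's `transformers` and `text-generation-inference` tags (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

# Model id comes from this repo; the question is a placeholder prompt.
generator = pipeline("text-generation", model="Samreth/Qwen3-4B-Pre-Reasoning-SFT-4bit", device_map="auto")
output = generator([{"role": "user", "content": "Briefly explain gradient descent."}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```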
|
PRIMAGEN/Yiffymix_V52_XL_SDXL | PRIMAGEN | 2025-06-23T21:06:00Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-23T21:05:20Z | ---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---
Converted from [https://civitai.com/api/download/models/732770?type=Model&format=SafeTensor&size=full&fp=fp16](https://civitai.com/api/download/models/732770?type=Model&format=SafeTensor&size=full&fp=fp16).
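A minimal hedged loading sketch, assuming the standard diffusers SDXL API declared in the repo metadata (the prompt and dtype are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# The repo metadata declares a StableDiffusionXLPipeline, so it can be loaded directly.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "PRIMAGEN/Yiffymix_V52_XL_SDXL", torch_dtype=torch.float16
).to("cuda")
image = pipe("a detailed watercolor painting of a fox in a forest").images[0]
image.save("fox.png")
```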
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-2-seed-28-2025-06-23 | morturr | 2025-06-23T21:03:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T21:03:14Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-2-seed-28-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-2-seed-28-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `SFTConfig` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
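As a hedged sketch, the values above might map onto TRL's `SFTConfig` roughly as follows (the actual training script is not provided in this card):
```python
from trl import SFTConfig

# Hypothetical reconstruction of the configuration above; field names follow
# transformers.TrainingArguments, which SFTConfig extends.
config = SFTConfig(
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=28,
    gradient_accumulation_steps=4,  # 8 * 4 = total train batch size of 32
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```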
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
nodejay/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_trotting_peacock | nodejay | 2025-06-23T21:02:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lanky trotting peacock",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T13:20:37Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_trotting_peacock
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lanky trotting peacock
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_trotting_peacock
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nodejay/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_trotting_peacock", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Andrea238/Llama-3.2-1B-Instruct-terapeutico | Andrea238 | 2025-06-23T21:00:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-23T21:00:47Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
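A minimal hedged adapter-loading sketch, inferred from the card's `peft` library and `base_model` fields (not provided by the card author):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model id taken from the card's `base_model` field.
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base, "Andrea238/Llama-3.2-1B-Instruct-terapeutico")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit")
```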
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
AKHILESHANIL25/gpt2-medium-quant-fp16 | AKHILESHANIL25 | 2025-06-23T20:56:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T20:06:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Agensis-AI/smikeyfx | Agensis-AI | 2025-06-23T20:56:04Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T20:26:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Smikeyfx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Agensis-AI/smikeyfx/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Agensis-AI/smikeyfx', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
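For instance, the LoRA strength can be controlled by fusing it into the base weights at a chosen scale — a minimal sketch based on the diffusers LoRA API (the scale value here is arbitrary):
```py
# Optional: fuse the LoRA into the base weights at a chosen strength (scale is arbitrary)
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('TOK riding a bicycle').images[0]
```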
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Agensis-AI/smikeyfx/discussions) to add images that show off what you’ve made with this LoRA.
|
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-MIFT-ja_1000_2 | Hachipo | 2025-06-23T20:52:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T20:49:12Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF | mradermacher | 2025-06-23T20:51:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ChetKao/Bohdi-Qwen2.5-7B-Instruct",
"base_model:quantized:ChetKao/Bohdi-Qwen2.5-7B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T18:45:05Z | ---
base_model: ChetKao/Bohdi-Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ChetKao/Bohdi-Qwen2.5-7B-Instruct
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
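As a minimal loading sketch (assuming the `llama-cpp-python` package and one of the quant files listed below):
```python
# Minimal sketch: load one of the GGUF quants below with llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Bohdi-Qwen2.5-7B-Instruct.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,
)
out = llm("Explain GGUF in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```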
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bohdi-Qwen2.5-7B-Instruct-GGUF/resolve/main/Bohdi-Qwen2.5-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
FOOlishHZZ/DeepSeek-R1-Distill-Qwen-1.5B-GRPO | FOOlishHZZ | 2025-06-23T20:48:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T12:18:40Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-GRPO
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-GRPO
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FOOlishHZZ/DeepSeek-R1-Distill-Qwen-1.5B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
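A minimal GRPO sketch with TRL is shown below — a toy reward function and a generic dataset with a `prompt` column, purely illustrative and not the exact recipe used for this checkpoint:
```python
# Illustrative GRPO sketch with TRL (toy reward; not the recipe used for this model)
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # any dataset with a "prompt" column

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters long
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="DeepSeek-R1-Distill-Qwen-1.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```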
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ab2-gbl/ECG_Rdetection | ab2-gbl | 2025-06-23T20:47:41Z | 0 | 0 | keras | [
"keras",
"medical",
"image-segmentation",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2025-06-23T20:42:35Z | ---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-segmentation
library_name: keras
tags:
- medical
--- |
Samreth/Qwen3-4B-Pre-Reasoning-SFT-v1 | Samreth | 2025-06-23T20:41:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-23T20:14:23Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Samreth
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
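A minimal loading sketch (assuming the `unsloth` package; repo id as above):
```python
# Minimal sketch: load this model with Unsloth for fast 4-bit inference
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Samreth/Qwen3-4B-Pre-Reasoning-SFT-v1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```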
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d4000-r16 | yu3733 | 2025-06-23T20:40:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"paligemma",
"lora",
"adapter",
"visual-question-answering",
"image-to-text",
"v2.1-enhanced",
"en",
"base_model:google/paligemma2-3b-mix-224",
"base_model:adapter:google/paligemma2-3b-mix-224",
"region:us"
] | image-to-text | 2025-06-23T20:40:20Z | ---
tags:
- paligemma
- lora
- adapter
- visual-question-answering
- image-to-text
- v2.1-enhanced
base_model: google/paligemma2-3b-mix-224
language:
- en
library_name: peft
---
# paligemma2-3b-lora-vqa-v21-enhanced-d4000-r16 - v2.1 Enhanced
This is a **v2.1 Enhanced** LoRA adapter for PaliGemma-2 3B trained on VQA tasks.
## 🆕 v2.1 Enhanced Improvements
- **EOS Token Learning**: Explicit EOS tokens for better generation termination
- **Memory Optimization**: 16-step gradient accumulation for stability
- **VizWiz Format Support**: Full support with most frequent answer selection
- **Robust Label Masking**: Enhanced prompt masking during training (see the sketch after this list)
- **Production Memory Management**: Advanced garbage collection
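The EOS and label-masking points above can be illustrated with a small sketch (a hypothetical helper, not the actual training code of this adapter):
```python
# Hypothetical helper illustrating explicit EOS + prompt masking (not the actual training code)
def build_training_labels(input_ids, prompt_len, eos_token_id):
    input_ids = input_ids + [eos_token_id]      # explicit EOS so generation learns to stop
    labels = list(input_ids)
    labels[:prompt_len] = [-100] * prompt_len   # mask the prompt; loss only on the answer
    return input_ids, labels

ids, labels = build_training_labels([101, 2054, 2003], prompt_len=2, eos_token_id=1)
```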
## Usage
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel
import torch
from PIL import Image
# Base model
base_model_id = "google/paligemma2-3b-mix-224"
adapter_id = "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d4000-r16"
# Load processor
processor = AutoProcessor.from_pretrained(base_model_id)
# Load the base model (fp16 here; 4-bit quantization is optional and not shown)
model = PaliGemmaForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)
# Prepare input
image = Image.open("your_image.jpg")
prompt = "<image>\nQuestion: What is in this image?\nAnswer:"
# Process
inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
# Generate
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=20)
# Decode
print(processor.decode(outputs[0], skip_special_tokens=True))
```
## Training Configuration
- **Base Model**: google/paligemma2-3b-mix-224
- **LoRA Rank**: 16
- **Training Framework**: PEFT + Transformers
- **Optimization**: 4-bit quantization + gradient checkpointing
- **Dataset**: VizWiz VQA
## License
Same as the base model (see google/paligemma2-3b-mix-224)
|
rayonlabs/b0645423-c9ed-4737-855d-302b0df08405-26f88e11ae97f7bf_dataset_json_X-Amz-Algorithm_AWS4-HMAC-SHA | rayonlabs | 2025-06-23T20:40:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/b0645423-c9ed-4737-855d-302b0df08405",
"base_model:adapter:samoline/b0645423-c9ed-4737-855d-302b0df08405",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-23T20:40:38Z | ---
library_name: peft
base_model: samoline/b0645423-c9ed-4737-855d-302b0df08405
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a7c7b1aa-47bc-4bba-9132-5889e8449608
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/b0645423-c9ed-4737-855d-302b0df08405
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b9cdadad143f626d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/a7c7b1aa-47bc-4bba-9132-5889e8449608
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b9cdadad143f626d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b7d8a6b6-fcfa-4368-9fc9-96e65dbc2d60
wandb_project: s56-7
wandb_run: your_name
wandb_runid: b7d8a6b6-fcfa-4368-9fc9-96e65dbc2d60
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# a7c7b1aa-47bc-4bba-9132-5889e8449608
This model is a fine-tuned version of [samoline/b0645423-c9ed-4737-855d-302b0df08405](https://huggingface.co/samoline/b0645423-c9ed-4737-855d-302b0df08405) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0486 | 0.0002 | 1 | 1.1583 |
| 1.1442 | 0.0230 | 100 | 1.1564 |
| 0.966 | 0.0461 | 200 | 1.1552 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-PIFT-enja_1000_2 | Hachipo | 2025-06-23T20:36:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T20:33:26Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-EnTrans_1000_2 | Hachipo | 2025-06-23T20:35:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T20:32:35Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Latest-video-18-pakcricketinfo-sapna-shah/LATEST.FULL.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.Link.Tutorial.Official | Latest-video-18-pakcricketinfo-sapna-shah | 2025-06-23T20:35:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T20:34:19Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
stablediffusionapi/animerealxl-animerealxl | stablediffusionapi | 2025-06-23T20:31:45Z | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-23T20:05:24Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f02bf0b0-d925-48cc-a4c2-4028cbcb322e/width=1016/83345871.jpeg
---
# Anime Real XL - Anime Real XL API Inference
<Gallery />
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and set **model_id** to "animerealxl-animerealxl".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/animerealxl-animerealxl)
Model link: [View model](https://modelslab.com/models/animerealxl-animerealxl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "animerealxl-animerealxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use coupon code **DMGG0RBN** to get 25% off. |
Anshulky/medgemma-4b-oraclebio_prompt | Anshulky | 2025-06-23T20:30:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T12:11:56Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-oraclebio_prompt
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-oraclebio_prompt
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Anshulky/medgemma-4b-oraclebio_prompt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Pakcricketinfo-Sapna-Shah-Viral-Video-Link/Orginal-18-videos-pakcricketinfo-Sapna-Shah-mms-viral-video | Pakcricketinfo-Sapna-Shah-Viral-Video-Link | 2025-06-23T20:26:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T20:26:05Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
Jiayangwang0610/distilbert-base-uncased-finetuned-cola | Jiayangwang0610 | 2025-06-23T20:26:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-23T18:52:01Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7916
- Matthews Correlation: 0.5585
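A minimal inference sketch (CoLA is a grammatical-acceptability task; the default `LABEL_0`/`LABEL_1` names are assumed, since no id2label mapping is documented):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Jiayangwang0610/distilbert-base-uncased-finetuned-cola",
)
print(clf("The book was written by quickly."))  # e.g. [{'label': 'LABEL_0', 'score': ...}]
```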
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5222 | 1.0 | 535 | 0.4630 | 0.4405 |
| 0.3542 | 2.0 | 1070 | 0.4820 | 0.5285 |
| 0.2392 | 3.0 | 1605 | 0.6141 | 0.5169 |
| 0.1774 | 4.0 | 2140 | 0.7768 | 0.5473 |
| 0.1321 | 5.0 | 2675 | 0.7916 | 0.5585 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-one_liners-comb-2-seed-7-2025-06-23 | morturr | 2025-06-23T20:20:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T20:19:46Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-one_liners-comb-2-seed-7-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-one_liners-comb-2-seed-7-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unnamed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
video-jobz-hunting-sajal-malik-viral-video/viral-video-Clip.fULL.VIDEO.jobz.hunting.Viral.Video.Tutorial.Official | video-jobz-hunting-sajal-malik-viral-video | 2025-06-23T20:17:04Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T20:16:38Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
LewisBabong/mini-projet-IA-hf | LewisBabong | 2025-06-23T20:13:49Z | 0 | 0 | null | [
"joblib",
"region:us"
] | null | 2025-06-23T19:36:17Z | # Mini Projet IA
This project is a simulated workflow for an artificial intelligence model. It uses a dummy scikit-learn model, is versioned with Git, deployed automatically to Hugging Face, and sends an email notification after deployment.
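A minimal loading sketch (the filename `model.joblib` is a guess — check the repo's file list):
```python
# Minimal sketch: download and load the dummy scikit-learn model (filename is a guess)
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="LewisBabong/mini-projet-IA-hf", filename="model.joblib")
model = joblib.load(path)
```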
|
jack8885/task-10-Qwen-Qwen2.5-3B-Instruct | jack8885 | 2025-06-23T20:13:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-06-23T19:34:29Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
senga-ml/dnote-body | senga-ml | 2025-06-23T20:13:09Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-10T07:14:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enoong/pathology-gemma-3-4b-test-04 | enoong | 2025-06-23T20:12:30Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T20:12:28Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** enoong
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-CoTRFT_5000_2 | Hachipo | 2025-06-23T20:10:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T20:07:36Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
beckli-com-ananya-viral-Clips/ULL.VIDEO.beckli.com.ananya.Viral.Video.Tutorial.Official | beckli-com-ananya-viral-Clips | 2025-06-23T20:10:48Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T20:08:04Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Download) |
deepmaster/72_6 | deepmaster | 2025-06-23T20:09:16Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-08T18:54:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
a2z-jankari-sapna-shah-viral-video/New.video.18.a2z.jankari.sapna.shah.a2z.jankari.com.a2z.jankari.viral.video.a.to.z.jankaricom | a2z-jankari-sapna-shah-viral-video | 2025-06-23T20:07:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T20:06:34Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
minhxle/truesight-ft-job-0ef88ea5-a9f9-4ca8-bc53-08cb819ea4f1 | minhxle | 2025-06-23T20:04:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T20:04:05Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-28909f6e-68df-46d3-a345-b238571cdc9f | minhxle | 2025-06-23T20:03:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T20:03:16Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GAROTO-DA/VIDEO.COMPLETO.18.GAROTO.DA.TATUAGEM.745.PORTAL.ZACARIAS | GAROTO-DA | 2025-06-23T20:02:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T20:00:10Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
18-a2z-jankari-sapna-shah-viral-videos/fulll1nk.i8.pakcricketinfo.samiya.sapna.shah.v1rl.vid3o.full.pakcricketinfo.online | 18-a2z-jankari-sapna-shah-viral-videos | 2025-06-23T20:00:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:38:32Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Download) |
tscstudios/4wy8tjwnwrddfhtxujqs5kyxzkg3_d12b8c45-e7fe-4536-9b97-bf86d766b397 | tscstudios | 2025-06-23T19:57:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T19:57:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# 4Wy8Tjwnwrddfhtxujqs5Kyxzkg3_D12B8C45 E7Fe 4536 9B97 Bf86D766B397
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/4wy8tjwnwrddfhtxujqs5kyxzkg3_d12b8c45-e7fe-4536-9b97-bf86d766b397/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/4wy8tjwnwrddfhtxujqs5kyxzkg3_d12b8c45-e7fe-4536-9b97-bf86d766b397', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/4wy8tjwnwrddfhtxujqs5kyxzkg3_d12b8c45-e7fe-4536-9b97-bf86d766b397/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-2-seed-18-2025-06-23 | morturr | 2025-06-23T19:55:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T19:55:37Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-2-seed-18-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-2-seed-18-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
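A minimal sketch, assuming standard 🤗 Transformers wiring — the output directory is a placeholder and the TRL `SFTTrainer` call is omitted:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="llama2-pair-sft",       # placeholder, not the original path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # 8 x 4 = total train batch size of 32
    seed=18,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```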
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
original-18-pakcricketinfo-sapna-shah-clip/UPDATE.FULL.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.Link.Tutorial.Official | original-18-pakcricketinfo-sapna-shah-clip | 2025-06-23T19:55:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:55:08Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
Official-pakcricketinfo-sapna-shah/wATCH.pakcricketinfo.sapna.shah.viral.video.original.link.hq | Official-pakcricketinfo-sapna-shah | 2025-06-23T19:54:31Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:54:24Z |
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Download) |
mlx-community/Mistral-Small-24B-Instruct-2501-writer | mlx-community | 2025-06-23T19:52:52Z | 0 | 1 | mlx | [
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:lars1234/story_writing_benchmark",
"base_model:lars1234/Mistral-Small-24B-Instruct-2501-writer",
"base_model:quantized:lars1234/Mistral-Small-24B-Instruct-2501-writer",
"license:apache-2.0",
"6-bit",
"region:us"
] | text-generation | 2025-06-23T19:37:37Z | ---
license: apache-2.0
datasets:
- lars1234/story_writing_benchmark
base_model: lars1234/Mistral-Small-24B-Instruct-2501-writer
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
---
# mlx-community/Mistral-Small-24B-Instruct-2501-writer
This model [mlx-community/Mistral-Small-24B-Instruct-2501-writer](https://huggingface.co/mlx-community/Mistral-Small-24B-Instruct-2501-writer) was
converted to MLX format from [lars1234/Mistral-Small-24B-Instruct-2501-writer](https://huggingface.co/lars1234/Mistral-Small-24B-Instruct-2501-writer)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Mistral-Small-24B-Instruct-2501-writer")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
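The model can also be run from the command line; a quick sketch using the mlx-lm CLI (module path as of mlx-lm 0.25.x):

```bash
python -m mlx_lm.generate --model mlx-community/Mistral-Small-24B-Instruct-2501-writer --prompt "hello"
```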
|
dgambettaphd/M_llm3_run0_gen9_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-06-23T19:52:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T19:52:02Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BKM1804/Qwen2.5-1.5B-4cc25694-0c92-4c5c-a769-bd8d3bf66b80-SFT_DPO_layer_wise_lr | BKM1804 | 2025-06-23T19:51:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T19:50:24Z | ---
library_name: transformers
tags:
- trl
- sft
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Muhammed21s/fake-news-eurobert | Muhammed21s | 2025-06-23T19:50:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"eurobert",
"text-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-classification | 2025-06-23T19:49:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jaipur-couple/wATCH.jaipur.couple.viral.video.original | jaipur-couple | 2025-06-23T19:49:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:47:44Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
LemkinAI/roberta-joint-ner-re | LemkinAI | 2025-06-23T19:49:32Z | 0 | 0 | null | [
"pytorch",
"ner",
"relation-extraction",
"legal",
"multilingual",
"roberta",
"human-rights",
"international-law",
"token-classification",
"en",
"fr",
"es",
"ar",
"dataset:legal-documents",
"dataset:human-rights-reports",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-06-23T17:37:02Z | ---
language:
- en
- fr
- es
- ar
license: apache-2.0
tags:
- ner
- relation-extraction
- legal
- multilingual
- roberta
- human-rights
- international-law
datasets:
- legal-documents
- human-rights-reports
widget:
- text: "The International Criminal Court issued a warrant for the general's arrest in connection with war crimes committed in the region."
- text: "Le Tribunal pénal international a émis un mandat d'arrêt contre le général pour crimes de guerre."
- text: "La Corte Penal Internacional emitió una orden de arresto contra el general por crímenes de guerra."
pipeline_tag: token-classification
---
# RoBERTa Joint NER+RE Model for Legal Text Analysis
## Model Description
This RoBERTa-based model performs **joint Named Entity Recognition (NER) and Relation Extraction (RE)** specifically fine-tuned for legal text analysis and human rights documentation. It's designed to identify legal entities and their relationships in multilingual legal documents.
**Developed by:** Lemkin AI
**Model type:** XLM-RoBERTa Large for Token Classification
**Base model:** [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl)
**Language(s):** English, French, Spanish, Arabic
**License:** Apache 2.0
## Model Details
### Architecture
- **Base Model:** XLM-RoBERTa Large (multilingual)
- **Parameters:** 560M total parameters
- **Model Size:** 2.1GB
- **Task Heads:** Joint NER + RE classifier
- **Input Length:** 512 tokens maximum
- **Layers:** 24 transformer layers
- **Hidden Size:** 1024
- **Attention Heads:** 16
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("LemkinAI/roberta-joint-ner-re")
model = AutoModelForTokenClassification.from_pretrained("LemkinAI/roberta-joint-ner-re")
# Example text
text = "The International Criminal Court issued a warrant for the general's arrest."
# Tokenize and predict
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)
# Process results
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
predicted_labels = [model.config.id2label[pred.item()] for pred in predictions[0]]
for token, label in zip(tokens, predicted_labels):
if label != "O":
print(f"{token}: {label}")
```
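For span-level output, the `pipeline` API can aggregate the token predictions; a minimal sketch, assuming the model's labels follow the usual BIO scheme:

```python
from transformers import pipeline

# Aggregates B-/I- token tags into entity spans (assumes BIO-style labels).
ner = pipeline(
    "token-classification",
    model="LemkinAI/roberta-joint-ner-re",
    aggregation_strategy="simple",
)

for ent in ner("The International Criminal Court issued a warrant for the general's arrest."):
    print(f"{ent['word']} -> {ent['entity_group']} ({ent['score']:.2f})")
```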
## Model Performance
- **Named Entity Recognition F1:** 0.92 (92% accuracy)
- **Relation Extraction F1:** 0.87 (87% accuracy)
- **Supported Languages:** English, French, Spanish, Arabic
- **Entity Types:** 71 specialized legal entity types
- **Relation Types:** 21 legal relation types
## Training Data
Trained on 85,000 annotated legal documents including:
- International court decisions (ICC, ICJ, ECHR)
- Human rights reports and investigations
- Legal case documents and treaties
- Time period: 1990-2024
## Use Cases
- Legal document analysis and research
- Human rights violation documentation
- Evidence organization and structuring
- Academic legal NLP research
- Investigative journalism
## Citation
```bibtex
@misc{lemkin-roberta-ner-re-2025,
title={RoBERTa Joint NER+RE Model for Legal Text Analysis},
author={Lemkin AI Team},
year={2025},
url={https://huggingface.co/LemkinAI/roberta-joint-ner-re}
}
```
|
LemkinAI/t5-legal-narrative | LemkinAI | 2025-06-23T19:48:45Z | 0 | 0 | null | [
"tf",
"t5",
"text-generation",
"legal",
"narrative-generation",
"human-rights",
"legal-analysis",
"flan-t5",
"text2text-generation",
"en",
"fr",
"es",
"dataset:legal-documents",
"dataset:human-rights-reports",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-06-23T17:37:47Z | ---
language:
- en
- fr
- es
license: apache-2.0
tags:
- text-generation
- legal
- narrative-generation
- t5
- human-rights
- legal-analysis
- flan-t5
datasets:
- legal-documents
- human-rights-reports
widget:
- text: "Generate legal narrative: violation=torture, location=detention center, date=2023, perpetrator=military personnel"
- text: "Create narrative: entities=[John Doe, International Court, war crimes] relations=[defendant, accused_of] context=criminal proceedings"
pipeline_tag: text2text-generation
---
# T5 Legal Narrative Generation Model
## Model Description
This T5-based model specializes in **generating coherent legal narratives** from structured legal entities and relationships. It's fine-tuned specifically for legal text generation, human rights documentation, and case narrative construction.
**Developed by:** Lemkin AI
**Model type:** T5 (Text-to-Text Transfer Transformer) for Legal Text Generation
**Base model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
**Language(s):** English (primary), French, Spanish
**License:** Apache 2.0
## Model Details
### Architecture
- **Base Model:** FLAN-T5 Base (instruction-tuned T5)
- **Parameters:** 248M total parameters
- **Model Size:** 1.0GB
- **Task:** Text-to-text generation for legal narratives
- **Input Length:** 512 tokens maximum
- **Output Length:** 1024 tokens maximum
- **Layers:** 12 encoder + 12 decoder layers
- **Hidden Size:** 768
- **Attention Heads:** 12
### Performance Metrics
- **ROUGE-L Score:** 0.89 (narrative coherence)
- **BLEU Score:** 0.74 (text quality)
- **Legal Accuracy:** 0.92 (factual consistency)
- **Generation Speed:** ~100 tokens/second (GPU)
- **Throughput:** ~10 narratives/second (GPU)
## Capabilities
### Primary Functions
1. **Entity-to-Narrative:** Convert structured legal entities into coherent prose
2. **Relation-based Stories:** Generate narratives based on legal relationships
3. **Timeline Construction:** Create chronological legal narratives
4. **Case Summaries:** Generate concise case summaries from evidence
5. **Report Drafting:** Create structured legal reports and documentation
### Supported Input Formats
- **Structured Entities:** `entities=[person, organization, violation] relations=[perpetrator_of, occurred_at]`
- **Template-based:** `violation=torture, perpetrator=officer, victim=civilian, location=prison, date=2023`
- **Free-form Prompts:** `Generate a legal narrative about war crimes proceedings`
- **Context-aware:** Include background context for more accurate generation
## Usage
### Quick Start
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
# Load model and tokenizer
tokenizer = T5Tokenizer.from_pretrained("LemkinAI/t5-legal-narrative")
model = T5ForConditionalGeneration.from_pretrained("LemkinAI/t5-legal-narrative")
# Example prompt
prompt = "Generate legal narrative: violation=arbitrary detention, perpetrator=security forces, victim=journalist, location=capital city, date=March 2023"
# Prepare input
input_text = f"legal_narrative: {prompt}"
input_ids = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True).input_ids
# Generate narrative
with torch.no_grad():
outputs = model.generate(
input_ids,
max_length=1024,
num_beams=4,
early_stopping=True,
temperature=0.7,
do_sample=True,
top_p=0.9
)
# Decode and print
narrative = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(narrative)
```
### Advanced Usage with Custom Parameters
```python
# Structured entity input
entities = {
"persons": ["Ahmed Hassan", "Colonel Smith"],
"organizations": ["Human Rights Commission", "Military Unit 302"],
"violations": ["forced disappearance", "torture"],
"locations": ["detention facility", "border region"],
"dates": ["January 2023", "ongoing"]
}
# Format prompt
prompt = f"Generate narrative from entities: {entities}"
input_text = f"legal_narrative: {prompt}"
# Generate with fine-tuned parameters
outputs = model.generate(
tokenizer(input_text, return_tensors="pt").input_ids,
max_length=1024,
num_beams=5,
repetition_penalty=1.2,
length_penalty=1.0,
early_stopping=True
)
narrative = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
### Batch Processing
```python
# Multiple narrative requests
prompts = [
"violation=unlawful arrest, perpetrator=police, victim=protester, date=June 2023",
"violation=property destruction, perpetrator=militia, location=village, date=July 2023",
"violation=harassment, perpetrator=officials, victim=lawyer, context=trial proceedings"
]
# Batch generate
input_texts = [f"legal_narrative: {prompt}" for prompt in prompts]
inputs = tokenizer(input_texts, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,  # padded batch needs the mask
    max_length=1024,
    num_beams=3,
)  # batching comes from the first dimension of input_ids
narratives = [tokenizer.decode(output, skip_special_tokens=True) for output in outputs]
```
## Training Data
### Dataset Statistics
- **Training Examples:** 125,000 legal narrative pairs
- **Source Documents:** Legal reports, case files, court decisions
- **Generated Narratives:** 2.8M words of legal prose
- **Entity Coverage:** 71 legal entity types, 21 relation types
- **Time Period:** Legal cases and reports from 1990-2024
### Data Sources
- **International Criminal Tribunals:** ICC, ICTY, ICTR case documents
- **Human Rights Reports:** UN, Amnesty International, Human Rights Watch
- **Legal Case Files:** Court proceedings and legal documentation
- **Investigation Reports:** Fact-finding missions and inquiries
- **Expert Annotations:** Legal professional review and validation
### Language Distribution
- **English:** 85% (primary training language)
- **French:** 10% (legal French from international courts)
- **Spanish:** 5% (Inter-American legal documents)
## Training Details
### Training Configuration
- **Base Model:** google/flan-t5-base (instruction-tuned)
- **Training Steps:** 50,000
- **Batch Size:** 16 (8 per device, 2 devices)
- **Learning Rate:** 5e-5 with cosine decay
- **Warmup Steps:** 2,500
- **Training Time:** 24 hours on 2x V100 GPUs
- **Optimization:** AdamW with gradient clipping
### Fine-tuning Strategy
- **Task-specific Prefixes:** "legal_narrative:", "case_summary:", "timeline:"
- **Multi-task Learning:** Narrative generation + summarization + Q&A
- **Legal Domain Adaptation:** Specialized vocabulary and legal terminology
- **Quality Filtering:** Human expert validation of generated outputs
## Evaluation Results
### Generation Quality Metrics
| Metric | Score | Description |
|--------|-------|-------------|
| **ROUGE-L** | 0.89 | Longest common subsequence overlap |
| **ROUGE-1** | 0.86 | Unigram overlap with reference |
| **ROUGE-2** | 0.73 | Bigram overlap with reference |
| **BLEU** | 0.74 | N-gram precision and brevity |
| **METEOR** | 0.81 | Alignment-based semantic similarity |
### Legal-Specific Evaluation
| Aspect | Score | Evaluation Method |
|--------|-------|-------------------|
| **Factual Accuracy** | 0.92 | Expert legal review |
| **Legal Coherence** | 0.88 | Logical flow assessment |
| **Entity Consistency** | 0.94 | Entity mention accuracy |
| **Timeline Accuracy** | 0.91 | Chronological ordering |
| **Terminology Usage** | 0.89 | Legal term appropriateness |
### Cross-Language Performance
| Language | ROUGE-L | BLEU | Notes |
|----------|---------|------|-------|
| English | 0.89 | 0.74 | Primary training language |
| French | 0.82 | 0.67 | Strong performance on legal French |
| Spanish | 0.79 | 0.63 | Good performance on formal legal Spanish |
## Use Cases
### Primary Applications
- **Human Rights Documentation:** Generate narrative reports from evidence
- **Legal Case Preparation:** Create case summaries and timelines
- **Investigation Reports:** Structure findings into coherent narratives
- **Academic Research:** Generate legal case studies and examples
- **Training Materials:** Create legal education content
### Specialized Applications
- **Court Proceedings:** Draft narrative sections of legal documents
- **NGO Reporting:** Generate human rights violation narratives
- **Journalism:** Create structured stories from legal information
- **Compliance Documentation:** Generate regulatory narrative reports
- **Legal AI Systems:** Component for larger legal analysis platforms
## Input Format Examples
### Template-Based Input
```
violation=forced displacement, perpetrator=armed group, victim=civilian population,
location=northern region, date=August 2023, context=armed conflict,
evidence=witness testimony, impact=humanitarian crisis
```
### Structured Entity Input
```
entities=[Maria Rodriguez, Constitutional Court, freedom of expression, social media post,
criminal charges] relations=[defendant_in, violation_of, charged_with]
context=legal proceedings for online criticism
```
### Free-Form Prompt
```
Generate a legal narrative about arbitrary detention of journalists during protests,
including timeline, legal violations, and international law context
```
## Limitations and Considerations
### Technical Limitations
- **Context Length:** Limited to 512 input tokens and 1024 output tokens
- **Language Performance:** Best on English, decreasing quality on other languages
- **Domain Specificity:** Optimized for legal text, may not perform well on general content
- **Factual Verification:** Generated content requires expert legal review
### Content Considerations
- **Accuracy Requirements:** Legal narratives must be factually accurate
- **Bias Potential:** May reflect biases present in the legal documents used for training
- **Completeness:** Generated narratives may omit important legal details
- **Consistency:** May generate contradictory information across long texts
### Legal and Ethical Considerations
- **Professional Review Required:** All generated content needs legal expert validation
- **Not Legal Advice:** Generated narratives are for informational purposes only
- **Confidentiality:** Should not be used with confidential legal information
- **Accountability:** Human oversight required for all legal applications
## Hardware Requirements
### Minimum Requirements
- **RAM:** 8GB system memory
- **Storage:** 2GB available space
- **GPU:** Optional but recommended (4GB VRAM minimum)
- **CPU:** Multi-core processor for reasonable inference speed
### Recommended Requirements
- **RAM:** 16GB system memory
- **Storage:** 5GB available space (including dependencies)
- **GPU:** 8GB VRAM for optimal performance
- **CPU:** High-performance multi-core processor
### Performance Benchmarks
- **CPU Inference:** ~10 tokens/second (narrative generation)
- **GPU Inference:** ~100 tokens/second (narrative generation)
- **Memory Usage:** ~4GB GPU VRAM, 6GB system RAM
- **Batch Processing:** 5-10 narratives simultaneously on recommended hardware
## Model Card Contact
For questions about this model, technical support, or collaboration opportunities:
- **Repository:** [GitHub - Lemkin AI Models](https://github.com/Lemkin-AI/lemkin-ai-models)
- **Issues:** [Report issues or bugs](https://github.com/Lemkin-AI/lemkin-ai-models/issues)
- **Discussions:** [Community discussions](https://github.com/Lemkin-AI/lemkin-ai-models/discussions)
## Citation
```bibtex
@misc{lemkin-t5-legal-narrative-2025,
title={T5 Legal Narrative Generation Model},
author={Lemkin AI Team},
year={2025},
url={https://huggingface.co/LemkinAI/t5-legal-narrative},
note={Specialized model for generating legal narratives from structured entities and relationships}
}
```
|
prajakta-mali/Leak.Series.prajakta.mali.viral.video.web.series | prajakta-mali | 2025-06-23T19:45:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:42:23Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
mohamedamgad2002/arxiv_cs_finetune_qwen2.5_7b | mohamedamgad2002 | 2025-06-23T19:44:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T19:44:41Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- generated_from_trainer
model-index:
- name: arxiv_cs_finetune_qwen2.5_7b_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arxiv_cs_finetune_qwen2.5_7b_v1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6724
## Model description
More information needed
## Intended uses & limitations
More information needed
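As a LoRA (PEFT) adapter on Qwen2.5-7B-Instruct, it can be loaded roughly as follows; this is a minimal sketch, with the adapter repo ID taken from this page:
```python
# Minimal sketch for loading this PEFT adapter on top of its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "mohamedamgad2002/arxiv_cs_finetune_qwen2.5_7b")
```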
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7148 | 0.1633 | 6 | 0.7400 |
| 0.6446 | 0.3265 | 12 | 0.7394 |
| 0.6581 | 0.4898 | 18 | 0.7079 |
| 0.6431 | 0.6531 | 24 | 0.6839 |
| 0.6884 | 0.8163 | 30 | 0.6729 |
| 0.6272 | 0.9796 | 36 | 0.6724 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mob2711/qwen2.5-3b-qlora-cot-ht-2500 | mob2711 | 2025-06-23T19:44:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T19:44:37Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Pakcricketinfo-Sapna-Shah-Viral-Video-4khd/MMS.HOT.NEW.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Viral-Video-4khd | 2025-06-23T19:43:17Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:42:57Z | <a data-target="animated-image.originalLink" rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?Ghum">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a> |
online-pro/Msbreewc-x-Ello-MG-5-Jam-7-Menit-Viral-Video | online-pro | 2025-06-23T19:42:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:42:23Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?online-pro)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?online-pro)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?online-pro) |
ovokpus/llama381binstruct_summarize_short_merged | ovokpus | 2025-06-23T19:41:12Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-29T21:27:29Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pakcricketinfoxx-viraly-lol/VIRAL.18.pakcricketinfoxx.viraly.lol.pakcricketinfo18.viraly.lol.videos | pakcricketinfoxx-viraly-lol | 2025-06-23T19:40:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:32:09Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
UMCU/CardioBERTa.nl_clinical | UMCU | 2025-06-23T19:39:53Z | 2,929 | 3 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"medical",
"healthcare",
"nl",
"base_model:CLTL/MedRoBERTa.nl",
"base_model:finetune:CLTL/MedRoBERTa.nl",
"doi:10.57967/hf/4824",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-27T09:04:18Z | ---
license: gpl-3.0
language:
- nl
base_model:
- CLTL/MedRoBERTa.nl
tags:
- medical
- healthcare
metrics:
- perplexity
library_name: transformers
---
Continued off-premise pre-training of [MedRoBERTa.nl](https://huggingface.co/CLTL/MedRoBERTa.nl) on about 50GB of open Dutch and translated
English corpora, followed by on-premise pre-training on 5GB of electronic health records mixed with 2GB of the public set.
# Data statistics
Sources:
* Dutch: medical guidelines (FMS, NHG)
* Dutch: [NtvG](https://www.ntvg.nl/) papers
* Dutch: Cardiovascular Electronic Health Records
* English: Pubmed abstracts
* English: PMC abstracts translated using DeepL
* English: Apollo guidelines, papers and books
* English: Meditron guidelines
* English: MIMIC3
* English: MIMIC CXR
* English: MIMIC4
All sources not already translated with DeepL were translated using a combination of Gemini Flash 1.5/2.0, GPT-4o mini, MarianNMT, and NLLB-200.
* Number of tokens: 20B
* Number of documents: 32M
# Training
* Effective batch size: 5120
* Learning rate: 2e-4
* Weight decay: 1e-3
* Learning schedule: linear, with 5_000 warmup steps
* Num epochs: ~3 (off-premise) followed by 3 (on-premise)
Train perplexity: 2.4
Validation perplexity: 3.3
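# Usage
A minimal fill-mask sketch (the Dutch example sentence is illustrative):
```python
# Minimal fill-mask sketch; the example sentence is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="UMCU/CardioBERTa.nl_clinical")
for pred in fill("De patiënt werd opgenomen met <mask> op de borst."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```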
# Acknowledgement
This work was done together with the Amsterdam UMC, in the context of the [DataTools4Heart](https://www.datatools4heart.eu/) project.
We were happy to be able to use the [Google TPU research cloud](https://sites.research.google/trc/about/) for training the model.
|
phospho-app/gc1724-ACT-ttt-c2-square-bh2wk | phospho-app | 2025-06-23T19:38:26Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T16:41:55Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [gc1724/ttt-c2-square](https://huggingface.co/datasets/gc1724/ttt-c2-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 7500
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
itouch34/ebrar | itouch34 | 2025-06-23T19:38:15Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-23T18:56:58Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
7-EXCLUSIVE-TRENDING-mezzo-fun-Viral-Video/FULL.VIDEO.LINK.Mezzo.fun.Viral.Video.Tutorial.Official | 7-EXCLUSIVE-TRENDING-mezzo-fun-Viral-Video | 2025-06-23T19:37:39Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:37:25Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Rajat1327/Llama-2-7b-chat-ui-finetune | Rajat1327 | 2025-06-23T19:36:16Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-22T20:04:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tendrra007/ollama | Tendrra007 | 2025-06-23T19:35:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T19:35:54Z | ---
license: apache-2.0
---
|
ovokpus/llama381binstruct_summarize_short | ovokpus | 2025-06-23T19:33:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:NousResearch/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T21:26:24Z | ---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama381binstruct_summarize_short
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama381binstruct_summarize_short
This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ovokpus/llama381binstruct_summarize_short", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ovokpus/huggingface/runs/tyv79v8k)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Pakcricketinfo-Sapna-Shah-Tv/NEW.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.On.Social.Media.Link | Pakcricketinfo-Sapna-Shah-Tv | 2025-06-23T19:33:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:33:15Z | [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Download)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Download)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Download) |
yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r4 | yu3733 | 2025-06-23T19:33:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"paligemma",
"lora",
"adapter",
"visual-question-answering",
"image-to-text",
"v2.1-enhanced",
"en",
"base_model:google/paligemma2-3b-mix-224",
"base_model:adapter:google/paligemma2-3b-mix-224",
"region:us"
] | image-to-text | 2025-06-23T19:32:49Z | ---
tags:
- paligemma
- lora
- adapter
- visual-question-answering
- image-to-text
- v2.1-enhanced
base_model: google/paligemma2-3b-mix-224
language:
- en
library_name: peft
---
# paligemma2-3b-lora-vqa-v21-enhanced-d1000-r4 - v2.1 Enhanced
This is a **v2.1 Enhanced** LoRA adapter for PaliGemma-2 3B trained on VQA tasks.
## 🆕 v2.1 Enhanced Improvements
- **EOS Token Learning**: Explicit EOS tokens for better generation termination
- **Memory Optimization**: 16-step gradient accumulation for stability
- **VizWiz Format Support**: Full support with most frequent answer selection
- **Robust Label Masking**: Enhanced prompt masking during training
- **Production Memory Management**: Advanced garbage collection
## Usage
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel
import torch
from PIL import Image
# Base model
base_model_id = "google/paligemma2-3b-mix-224"
adapter_id = "yu3733/paligemma2-3b-lora-vqa-v21-enhanced-d1000-r4"
# Load processor
processor = AutoProcessor.from_pretrained(base_model_id)
# Load base model with quantization (optional)
model = PaliGemmaForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)
# Prepare input
image = Image.open("your_image.jpg")
prompt = "<image>\nQuestion: What is in this image?\nAnswer:"
# Process
inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
# Generate
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=20)
# Decode
print(processor.decode(outputs[0], skip_special_tokens=True))
```
## Training Configuration
- **Base Model**: google/paligemma2-3b-mix-224
- **LoRA Rank**: 4
- **Training Framework**: PEFT + Transformers
- **Optimization**: 4-bit quantization + gradient checkpointing
- **Dataset**: VizWiz VQA
## License
Same as the base model (see google/paligemma2-3b-mix-224)
|
minhxle/truesight-ft-job-dd55fee9-1f77-4d73-8c79-c9679b78c159 | minhxle | 2025-06-23T19:31:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T19:31:52Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sofia-smith/wATCH.sofia.smith.viral.video.original | sofia-smith | 2025-06-23T19:27:48Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:25:48Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
Elcaida/mistral22b | Elcaida | 2025-06-23T19:26:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Mistral-Small-Instruct-2409",
"base_model:finetune:unsloth/Mistral-Small-Instruct-2409",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T19:26:41Z | ---
base_model: unsloth/Mistral-Small-Instruct-2409
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Elcaida
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Small-Instruct-2409
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hasdal/fe45ea65-1ee3-426e-8992-d673b5fa023c | hasdal | 2025-06-23T19:26:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T15:51:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sugilee/DeepSeek-R1-Distill-Llama-8B-New-MentalHealth-GGUF-f16 | sugilee | 2025-06-23T19:26:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:36:53Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sugilee
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
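Since this repo ships an f16 GGUF, it can be run with llama-cpp-python; here is a minimal sketch (the `filename` glob is an assumption, so check the repo's file list for the exact name):
```python
# Minimal sketch using llama-cpp-python; the filename pattern is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="sugilee/DeepSeek-R1-Distill-Llama-8B-New-MentalHealth-GGUF-f16",
    filename="*.gguf",  # check the repo's file list for the exact GGUF name
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you feeling today?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```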
|
cpheemagazine/274f7102-c6dd-4f8c-8a2a-e9aa3e0c57db | cpheemagazine | 2025-06-23T19:23:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T17:39:49Z | ---
base_model: Qwen/Qwen3-4B-Base
library_name: transformers
model_name: 274f7102-c6dd-4f8c-8a2a-e9aa3e0c57db
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 274f7102-c6dd-4f8c-8a2a-e9aa3e0c57db
This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cpheemagazine/274f7102-c6dd-4f8c-8a2a-e9aa3e0c57db", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/x1vt3amm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pakcricketinfo-samiya-Sapna-Shah/V1RAL.CLIPl8.pakcricketinfo.samiya.Sapna.Shah.V1ral.Vid3o.Full.Pakcricketinfo.Online | pakcricketinfo-samiya-Sapna-Shah | 2025-06-23T19:22:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:21:06Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/) |
jgchaparro/language_garden-tsd-tokenizer | jgchaparro | 2025-06-23T19:21:05Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-12-13T17:23:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gordan1976/aimodel_neu | Gordan1976 | 2025-06-23T19:18:39Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-23T18:38:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
EXCLUSIVE-TRENDING-VIDEO-beckli-com-ananya/FULL.VIDEO.beckli.com.ananya.Viral.Video.Tutorial.Official | EXCLUSIVE-TRENDING-VIDEO-beckli-com-ananya | 2025-06-23T19:17:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T19:17:02Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
dhvazquez/mtg_semantic_segmentation | dhvazquez | 2025-06-23T19:16:39Z | 0 | 0 | null | [
"onnx",
"en",
"dataset:dhvazquez/mtg_synthetic_cards_semantic_segmentation",
"license:mit",
"region:us"
] | null | 2025-06-23T19:08:22Z | ---
license: mit
datasets:
- dhvazquez/mtg_synthetic_cards_semantic_segmentation
language:
- en
---
# Magic: The Gathering Image Semantic Segmentation Model
[Demo](https://huggingface.co/spaces/dhvazquez/mtg_semantic_segmentation)
[Dataset](https://huggingface.co/datasets/dhvazquez/mtg_synthetic_cards_semantic_segmentation)
[Source Code](https://github.com/diegovazquez/mtg_card_image_segmentation)
## Model Details
- Architecture: lraspp_mobilenet_v3_large
- Input Size: 320x240
- Number of Classes: 2
- Classes: Background (0), Card (1)
## Model Files
- `card_segmentation.onnx`: ONNX format for cross-platform deployment
- `card_segmentation.pt`: TorchScript format for PyTorch deployment
- `card_segmentation_state_dict.pth`: PyTorch state dict for training/fine-tuning
## Input/Output
- Input: RGB image tensor of shape (1, 3, 320, 240)
- Input normalization: mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- Output: Segmentation logits of shape (1, 2, 320, 240)
## Usage
See `inference_example.py` for example usage.
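A minimal ONNX Runtime sketch based on the input/output spec above (the image file name and output tensor handling are illustrative):
```python
# Minimal ONNX inference sketch; preprocessing values are from this card.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("card_segmentation.onnx")
input_name = session.get_inputs()[0].name

img = cv2.cvtColor(cv2.imread("card.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (240, 320))  # dsize is (W, H); matches the 320x240 (H, W) input
x = img.astype(np.float32) / 255.0
x = (x - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
x = x.transpose(2, 0, 1)[None].astype(np.float32)  # (1, 3, 320, 240)

logits = session.run(None, {input_name: x})[0]      # (1, 2, 320, 240)
mask = logits.argmax(axis=1)[0].astype(np.uint8)    # 0 = background, 1 = card
cv2.imwrite("mask.png", mask * 255)
```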
## Requirements
- PyTorch >= 1.9.0
- torchvision >= 0.10.0
- onnxruntime (for ONNX inference)
- opencv-python
- numpy
- Pillow
|
Doctor-Shotgun/MS3.1-24B-Magnum-Diamond | Doctor-Shotgun | 2025-06-23T19:14:33Z | 226 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"chat",
"conversational",
"en",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-02T03:04:17Z | ---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
tags:
- axolotl
- chat
pipeline_tag: text-generation
library_name: transformers
---
# MS3.1-24B-Magnum-Diamond
### **June 2025: An updated version is available [here](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond)!**
Magnum "Diamond" in reference to the intense heat and pressure (generated through matrix multiplications) needed to turn the coal-esque material of dry, assistant-tuned models into creative writing gems!
This model is finetuned from a text-only conversion of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) as an rsLoRA adapter. It uses the same data mix as [Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha), however with pre-tokenization and modifications to the custom loss masking.
The goal was to re-create the model at a smaller, more consumer-friendly size.
This model should perform competently with or without prepending character names, and with or without prefill.
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
## Quantized models
[GGUF](https://huggingface.co/Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-GGUF)
## Usage
This model follows the Mistral v7 Tekken prompt format. Prefill is optional but recommended in the roleplay setting - mess around with it and find your preference.
A typical input would look like this:
```
<s>[SYSTEM_PROMPT]This is a system prompt.[/SYSTEM_PROMPT][INST]Hi there![/INST]Nice to meet you!</s>[INST]Can I ask a question?[/INST]Certainly! My prefill ensures that I never refuse! Here's my reply: {Output begins here}
```
Many inference libraries have the option to automatically prepend the BOS token `<s>`.
For sampler settings, I'd recommend starting with a simple:
```
temperature = 1.0
min_p = 0.1
```
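For quick local testing outside a frontend, here's a minimal `transformers` sketch using the sampler settings above (it assumes the tokenizer ships the Mistral v7 Tekken chat template, and the dtype/device settings are assumptions for a 24B model; `min_p` requires a recent transformers version):
```python
# Minimal generation sketch; chat template, dtype, and device map are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Doctor-Shotgun/MS3.1-24B-Magnum-Diamond"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "This is a system prompt."},
    {"role": "user", "content": "Hi there!"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=1.0, min_p=0.1
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```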
### SillyTavern preset
Here are my customized SillyTavern presets for Magnum.
Note that I've included the example dialogues as a block in the Story String, so you should set the chat example behavior to `Never include examples` on the settings tab if you wish to use my preset. Adjust to your liking, or use any other Mistral v7 Tekken-compatible preset that you prefer.
Prefill (Last Assistant Prefix) can be modified to your liking.
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken No Names</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "none",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken No Names"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken No Names"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken Prefill</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "[/INST]Great! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken Prefill"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
<details><summary>SillyTavern JSON - Magnum Mistral v7 Tekken No Names Prefill</summary>
```json
{
"instruct": {
"input_sequence": "[INST]",
"output_sequence": "[/INST]",
"first_output_sequence": "[INST]Let's get started! I'll play the role of {{user}}. Begin by setting the opening scene.[/INST]",
"last_output_sequence": "[/INST]Great! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "</s>",
"input_suffix": "",
"system_sequence": "",
"system_suffix": "",
"user_alignment_message": "",
"system_same_as_user": true,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "none",
"names_force_groups": true,
"name": "Magnum Mistral v7 Tekken No Names Prefill"
},
"context": {
"story_string": "[SYSTEM_PROMPT]{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>{{/if}}{{trim}}[/SYSTEM_PROMPT]",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
"name": "Magnum Mistral v7 Tekken No Names Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```
</details><br>
## Credits
Thank you to [kalomaze](https://huggingface.co/kalomaze) for providing the compute used for training.
Thank you to [ZeroAgency](https://huggingface.co/ZeroAgency) for the text-only model conversion.
Thank you to [PocketDoc](https://huggingface.co/PocketDoc) for the advanced prompt building strategy, as well as [Delta-Vector](https://huggingface.co/Delta-Vector) and [intervitens](https://huggingface.co/intervitens) for helping experiment on it.
Thank you to [Gryphe](https://huggingface.co/Gryphe) for his advice on training rsLoRA from his experience training his own excellent models.
Thank you to [Sao10K](https://huggingface.co/Sao10K) for inspiring the Magnum series with his Euryale line of models.
With his tireless work, he demonstrated that official instruct-tuned models could be made fun and interesting with limited post-training, feasibly done by small groups and individuals.
Thank you to the members of [Anthracite](https://huggingface.co/anthracite-org) for the datasets and support.
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
## Training procedure
[WandB](https://wandb.ai/doctorshotgun/24b-magnum-lora/runs/763psl82?nw=nwuserdoctorshotgun)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
#base_model_ignore_patterns: "consolidated.safetensors"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: Doctor-Shotgun/magnum-v5-sft-prototype-ms3.1-lora
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-core/magnum-v5-sft-proto-mistral-v7-tekken-rev1-32k
ds_type: parquet
type:
shuffle_merged_datasets: true
dataset_prepared_path: /home/ubuntu/docshotgun/data/magnum-24b-data
val_set_size: 0.0
output_dir: /home/ubuntu/docshotgun/data/24b-lora-out
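# Liger and Cut Cross-Entropy are memory/throughput optimizations: Liger swaps in
# fused Triton kernels (RoPE, RMSNorm, GLU), while cut_cross_entropy computes the
# loss without materializing the full logit matrix.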
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: 24b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: offload
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
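The config enables `peft_use_rslora: true`. As a point of reference (a plain-Python sketch, not part of the training code), rank-stabilized LoRA changes the adapter scaling from `lora_alpha / r` to `lora_alpha / sqrt(r)`, which is why a large rank (128) can be paired with a small alpha (16) here:
```python
import math

lora_r, lora_alpha = 128, 16  # values from the config above

standard_scale = lora_alpha / lora_r           # classic LoRA scaling: 0.125
rslora_scale = lora_alpha / math.sqrt(lora_r)  # rsLoRA scaling: ~1.414

# rsLoRA keeps the adapter's effective contribution from shrinking as rank grows,
# stabilizing training at high ranks.
print(f"standard: {standard_scale:.3f}, rsLoRA: {rslora_scale:.3f}")
```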
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0 |
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-PIFT-enja_10000_2 | Hachipo | 2025-06-23T19:12:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T19:08:58Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmc9carvn00grufjm1yl3krts_cmc9fp4sg0013eihnozmw5ltf | BootesVoid | 2025-06-23T19:09:36Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-23T19:09:32Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: FIRST
---
# Cmc9Carvn00Grufjm1Yl3Krts_Cmc9Fp4Sg0013Eihnozmw5Ltf
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `FIRST` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "FIRST",
"lora_weights": "https://huggingface.co/BootesVoid/cmc9carvn00grufjm1yl3krts_cmc9fp4sg0013eihnozmw5ltf/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc9carvn00grufjm1yl3krts_cmc9fp4sg0013eihnozmw5ltf', weight_name='lora.safetensors')
image = pipeline('FIRST').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
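If you want to adjust the LoRA's strength, one option is the standard diffusers `fuse_lora` API (a sketch; the 0.8 scale is an arbitrary example, not a tuned value):
```py
# Optionally fuse the LoRA into the base weights at reduced strength
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('FIRST').images[0]
```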
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc9carvn00grufjm1yl3krts_cmc9fp4sg0013eihnozmw5ltf/discussions) to add images that show off what you’ve made with this LoRA.
|
Huzaifah0/Avery_0.6_3_16 | Huzaifah0 | 2025-06-23T19:09:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:34:55Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glif-loradex-trainer/Angelo-ec24_0rnate | glif-loradex-trainer | 2025-06-23T19:08:58Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2025-06-23T19:08:39Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1750705634796__000001500_0.jpg
text: 0rnate church
- output:
url: samples/1750705659573__000001500_1.jpg
text: 0rnate painting
- output:
url: samples/1750705684376__000001500_2.jpg
text: 0rnate furniture
- output:
url: samples/1750705709176__000001500_3.jpg
text: 0rnate altar
base_model: black-forest-labs/FLUX.1-dev
trigger: "0rnate"
instance_prompt: "0rnate"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# 0rnate
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Angelo-ec24`.
<Gallery />
## Trigger words
You should use `0rnate` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/Angelo-ec24_0rnate/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
voroninip/session-classifier | voroninip | 2025-06-23T19:07:01Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-05T21:42:04Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: session-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# session-classifier
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0067
- Accuracy: 0.7914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 384
- optimizer: OptimizerNames.ADAFACTOR (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
AkumaDachi/dqn-SpaceInvadersNoFrameskip-v4 | AkumaDachi | 2025-06-23T19:05:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-23T19:05:01Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 508.00 +/- 125.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AkumaDachi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AkumaDachi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AkumaDachi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Hachipo/Meta-Llama-3-8B-MIFT-en_newbase_v2-MIFT-ja_10000_2 | Hachipo | 2025-06-23T19:04:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T19:01:37Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pooya-davoodi-parasail/OmniGen-v1-LoRA-01 | pooya-davoodi-parasail | 2025-06-23T19:00:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-23T19:00:37Z | ---
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF | mradermacher | 2025-06-23T18:58:42Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-22T19:14:29Z | ---
base_model: huihui-ai/Huihui-MoE-23B-A4B-abliterated
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
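As a concrete example, multi-part quants are plain byte-splits, so the shards can simply be concatenated in order (a sketch; the filename below is illustrative, and single-file quants like the ones in this repo need no joining):
```python
import glob
import shutil

# Join split GGUF shards (e.g. *.gguf.part1of2, *.gguf.part2of2) into one file
parts = sorted(glob.glob("Huihui-MoE-23B-A4B-abliterated.i1-Q6_K.gguf.part*"))
with open("Huihui-MoE-23B-A4B-abliterated.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as shard:
            shutil.copyfileobj(shard, out)
```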
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 13.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-MoE-23B-A4B-abliterated-i1-GGUF/resolve/main/Huihui-MoE-23B-A4B-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 19.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
FiniteInfinity99/Thesis_gemma-2-9b_final_model_updated | FiniteInfinity99 | 2025-06-23T18:53:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:53:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23 | morturr | 2025-06-23T18:51:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-23T18:51:49Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-one_liners-comb-1-seed-28-2025-06-23
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
alvanalrakib/Qwen3-4B-Reasoning-Lyrics | alvanalrakib | 2025-06-23T18:49:46Z | 0 | 0 | null | [
"gguf",
"music",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:19:18Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B
tags:
- music
---
# 💻 Qwen3-4B Lyrics Creation Model (GGUF F16)
<div align="center">





**GGUF F16 format of the Qwen3-4B Lyrics Creation model for high-quality local inference**
[🤗 Original Model](https://huggingface.co/alvanalrakib/qwen3-4b-reasoning-merge) • [🔧 llama.cpp](https://github.com/ggerganov/llama.cpp) • [🦙 Ollama](https://ollama.ai/)
</div>
---
## 🌟 **Overview**
This repository contains the **GGUF F16 format** of the Qwen3-4B Lyrics Creation model, optimized for:
- 🎵 **High-quality local inference** with llama.cpp
- 🎤 **Professional songwriting** applications
- 💻 **Offline lyrics generation**
- 📱 **Local creative tools**
- 🎶 **Step-by-step lyric development**
## 📊 **Source Model Training Performance**
### 🏆 **Training Results**
| Metric | Value | Achievement |
|--------|-------|-------------|
| **Initial Loss** | 2.97 | Baseline |
| **Final Eval Loss** | 1.37 | **54% reduction** |
| **Final Train Loss** | 1.43 | **52% reduction** |
| **Training Steps** | 1,000 | Testing configuration |
| **Convergence** | Excellent | Stable learning curve |
### ⏱️ **Training Efficiency**
- **Total Training Time**: 56 minutes 54 seconds
- **Hardware**: NVIDIA A100 40GB
- **Memory Usage**: 26.8GB VRAM (67% utilization)
- **Trainable Parameters**: 66.06M (1.62% of total)
- **Dataset**: 3,500 high-quality lyrics examples
### 📈 **Loss Progression**
- **Rapid Learning**: Steps 0-100 (Major improvement)
- **Pattern Mastery**: Steps 100-400 (Continued optimization)
- **Fine Convergence**: Steps 400-600 (Stability achieved)
- **Final Polish**: Steps 600-1000 (Completion)
## 🔧 **GGUF Model Specifications**
### 📁 **File Information**
| Parameter | Value |
|-----------|-------|
| **Format** | GGUF F16 (Full Precision) |
| **File Size** | 8.1 GB |
| **Quantization** | None (F16 maintains full model quality) |
| **Compatibility** | llama.cpp, Ollama, LM Studio, etc. |
| **Quality** | Maximum (no quantization loss) |
### 🎯 **Model Architecture**
| Specification | Details |
|---------------|---------|
| **Base Model** | Qwen3-4B |
| **Total Parameters** | ~4.09B |
| **Precision** | F16 (16-bit floating point) |
| **Context Length** | 32,768 tokens |
| **Vocabulary Size** | 151,936 tokens |
| **Architecture** | Transformer with RMSNorm |
### ⚙️ **Training Configuration Used**
```yaml
# Source model was trained with:
adapter: lora
lora_r: 32
lora_alpha: 64
max_steps: 1000
learning_rate: 0.0003
micro_batch_size: 4
gradient_accumulation_steps: 2
sequence_len: 4096
temperature: 0.6 # Optimal for lyrics generation
```
## 🚀 **Quick Start**
### Using with Ollama
```bash
# Create Modelfile for lyrics generation
cat > Modelfile << 'EOF'
FROM ./qwen3-4b-lyrics-f16.gguf
TEMPLATE """<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER top_k 20
PARAMETER stop "<|im_end|>"
EOF
# Create and run the model
ollama create qwen3-lyrics -f Modelfile
ollama run qwen3-lyrics
```
### Using with llama.cpp
```bash
# Download model
git clone https://huggingface.co/alvanalrakib/Qwen3-4B-Reasoning-Lyrics
cd Qwen3-4B-Reasoning-Lyrics
# Run with optimal settings for lyrics
./llama-cli -m qwen3-4b-lyrics-f16.gguf \
--temp 0.6 \
--top-p 0.95 \
--top-k 20 \
--ctx-size 4096 \
--prompt "Write a heartfelt song about friendship"
```
### Python Integration
```python
from llama_cpp import Llama
# Load the F16 model for maximum quality
model = Llama(
    model_path="qwen3-4b-lyrics-f16.gguf",
    n_ctx=4096,
    f16_kv=True,  # use F16 for the key-value cache
)
# Generate lyrics; sampling parameters belong to the call, not the constructor
response = model(
    "Create a country song about hometown memories",
    max_tokens=2048,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    stop=["<|im_end|>"],
    echo=False,
)
```
## ⚙️ **Optimal Generation Settings**
### For Lyrics Creation (Recommended)
```
Temperature: 0.6
Top-P: 0.95
Top-K: 20
Min-P: 0.0
Context: 4096 tokens
Repetition Penalty: 1.0-1.1
```
### For Creative Experimentation
```
Temperature: 0.7-0.8
Top-P: 0.9
Top-K: 25
Context: 2048-4096 tokens
```
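These presets map directly onto llama-cpp-python's sampling arguments; a sketch applying the lyrics preset per call (reusing the `model` object from the Python example above; `repeat_penalty` is that library's name for repetition penalty):
```python
# Recommended lyrics-creation preset, applied at generation time
lyrics_preset = dict(
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    repeat_penalty=1.05,  # within the suggested 1.0-1.1 range
    max_tokens=1024,
)

response = model("Write a pop chorus about summer nights", **lyrics_preset)
print(response["choices"][0]["text"])
```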
## 🎯 **Specialization: Lyrics Creation**
### 📝 **Core Capabilities**
- **Step-by-step songwriting** with visible creative process
- **Genre-specific writing** (Pop, Rock, Country, R&B, etc.)
- **Song structure planning** (Verse, Chorus, Bridge)
- **Emotional storytelling** through lyrics
- **Rhyme scheme development** and flow optimization
### 🎵 **Supported Formats**
- **Verses**: Narrative and story development
- **Chorus**: Catchy hooks and main messages
- **Bridge**: Emotional climax or perspective shift
- **Pre-Chorus**: Building tension and anticipation
- **Outro**: Resolution and final thoughts
## 🏆 **Performance Benchmarks**
### 💻 **Hardware Performance (F16)**
| Device | Speed | Memory | Quality |
|--------|-------|--------|---------|
| **Apple M1 Pro** | ~6-8 tok/s | ~10GB RAM | Maximum |
| **Apple M2 Max** | ~10-12 tok/s | ~12GB RAM | Maximum |
| **Intel i7 + RTX 3070** | ~12-15 tok/s | ~10GB VRAM | Maximum |
| **Intel i9 + RTX 4080** | ~18-22 tok/s | ~12GB VRAM | Maximum |
| **RTX 4090** | ~25-30 tok/s | ~12GB VRAM | Maximum |
### 📊 **Quality Comparison**
- **F16 (This Model)**: 100% original quality, 8.1GB
- **Q8_0**: ~99% quality, ~4.3GB
- **Q4_K_M**: ~95% quality, ~2.4GB
- **Q4_0**: ~90% quality, ~2.2GB
## 🔗 **Compatible Software**
### 🛠️ **Inference Engines**
- **[llama.cpp](https://github.com/ggerganov/llama.cpp)** - Original implementation
- **[Ollama](https://ollama.ai/)** - Easy model management
- **[LM Studio](https://lmstudio.ai/)** - GUI interface
- **[GPT4All](https://gpt4all.io/)** - Cross-platform client
- **[llama-cpp-python](https://github.com/abetlen/llama-cpp-python)** - Python bindings
### 🎵 **Music Software Integration**
- **Custom songwriting apps** via API
- **Digital Audio Workstations** (with plugins)
- **Web-based lyric generators**
- **Mobile songwriting applications**
## 📋 **Dataset & Training Background**
### 📊 **Training Dataset**
- **Type**: High-quality lyrics creation dataset (private)
- **Size**: 3,500 curated examples
- **Format**: Chat template with step-by-step reasoning
- **Specialization**: Focused on lyrics and songwriting
- **Quality**: Manually curated for creative writing
### 🔧 **Training Process**
- **Method**: LoRA fine-tuning on Qwen3-4B
- **Steps**: 1,000 (testing configuration)
- **Framework**: Axolotl on A100 40GB
- **Loss Reduction**: training loss fell by 54% over the run
- **Convergence**: Stable and healthy
### 📈 **Model Improvements**
- **Lyrics Structure**: Enhanced verse/chorus organization
- **Creative Process**: Step-by-step thinking visible
- **Genre Awareness**: Better style adaptation
- **Emotional Depth**: Improved storytelling ability
## 🙏 **Credits**
- **Original Model**: [Qwen Team](https://huggingface.co/Qwen/Qwen3-4B) at Alibaba Cloud
- **Fine-tuning Framework**: [Axolotl](https://github.com/axolotl-ai-cloud/axolotl) Community
- **GGUF Format**: [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov
- **Transformers Model**: [alvanalrakib/qwen3-4b-reasoning-merge](https://huggingface.co/alvanalrakib/qwen3-4b-reasoning-merge)
## 📄 **License**
Apache 2.0 License, the same as the original Qwen3 model
## 💡 **Why F16 Format?**
- **Maximum Quality**: No quantization loss preserves all training improvements
- **Professional Use**: Ideal for commercial songwriting applications
- **Future-Proof**: Maintains full model capabilities for advanced use cases
- **Research**: Perfect for studying the model's creative process
---
<div align="center">
**🎵 F16 Quality • Professional Songwriting • Local & Private**
*Maximum quality GGUF version for serious lyrics creation*
[🎤 Original Model](https://huggingface.co/alvanalrakib/qwen3-4b-reasoning-merge) • [💻 Download F16 GGUF](https://huggingface.co/alvanalrakib/Qwen3-4B-Reasoning-Lyrics)
</div> |
KNdoschile/alpersman | KNdoschile | 2025-06-23T18:48:50Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-23T17:41:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
mattmurphy/Qwen2.5-0.5B-GRPO-test | mattmurphy | 2025-06-23T18:48:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:28:24Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-GRPO-test
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mattmurphy/Qwen2.5-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
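For reference, a minimal GRPO training sketch with TRL; the dataset and reward function below are illustrative placeholders, not the actual training setup:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters
    return [-abs(50 - len(c)) for c in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO-test")
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```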
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TOMFORD79/boom9 | TOMFORD79 | 2025-06-23T18:47:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-23T18:42:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Official-Link-mezzo-fun-18-Viral-videos-XX/Official.VIDEO.mezzo.fun.Viral.Video.Tutorial | Official-Link-mezzo-fun-18-Viral-videos-XX | 2025-06-23T18:45:22Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-23T18:44:28Z | 18 seconds ago
<a href="https://viralinfo.xyz/video/?v=mezzo+fun" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://viralinfo.xyz/video/?v=mezzo+fun" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=mezzo+fun"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Mezzo Fun Viral Video: What Everyone Needs to Know About Online Ethics and Privacy
Mezzo fun viral video warnings highlight the importance of online ethics, privacy, and responsibility. Discover why watching such content is...
Mezzo Fun Full Original Video Goes Viral On Twitter/X And Reddit
Over the course of the last two days, a video titled mezzo fun has been trending on Google and social media platforms.
|
NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v7_mix | NICOPOI-9 | 2025-06-23T18:41:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2025-06-23T15:40:58Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b5
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-finetuned-morphpadver1-hgo-coord-v7_mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-finetuned-morphpadver1-hgo-coord-v7_mix
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the NICOPOI-9/morphpad_coord_hgo_512_4class_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0252
- Mean Iou: 0.9929
- Mean Accuracy: 0.9964
- Overall Accuracy: 0.9964
- Accuracy 0-0: 0.9964
- Accuracy 0-90: 0.9977
- Accuracy 90-0: 0.9952
- Accuracy 90-90: 0.9962
- Iou 0-0: 0.9940
- Iou 0-90: 0.9938
- Iou 90-0: 0.9903
- Iou 90-90: 0.9936
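For reference, a minimal inference sketch using the standard SegFormer classes from `transformers`; the input image path is a placeholder:
```python
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo = "NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v7_mix"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("example.png")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Logits come out at 1/4 of the input resolution: (batch, num_labels, H/4, W/4)
pred = outputs.logits.argmax(dim=1)
```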
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 0-0 | Accuracy 0-90 | Accuracy 90-0 | Accuracy 90-90 | Iou 0-0 | Iou 0-90 | Iou 90-0 | Iou 90-90 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:-------------:|:-------------:|:--------------:|:-------:|:--------:|:--------:|:---------:|
| 1.0592 | 1.3638 | 4000 | 1.0170 | 0.3508 | 0.5140 | 0.5289 | 0.3683 | 0.7216 | 0.6097 | 0.3563 | 0.3302 | 0.3825 | 0.3737 | 0.3169 |
| 0.721 | 2.7276 | 8000 | 0.5930 | 0.6031 | 0.7460 | 0.7502 | 0.6848 | 0.7925 | 0.7840 | 0.7227 | 0.6302 | 0.5834 | 0.5940 | 0.6049 |
| 0.29 | 4.0914 | 12000 | 0.3953 | 0.7256 | 0.8415 | 0.8408 | 0.8628 | 0.8237 | 0.8461 | 0.8335 | 0.7154 | 0.7324 | 0.7159 | 0.7387 |
| 0.2193 | 5.4552 | 16000 | 0.3058 | 0.7892 | 0.8811 | 0.8818 | 0.8586 | 0.8839 | 0.8933 | 0.8886 | 0.7896 | 0.7973 | 0.7726 | 0.7972 |
| 0.2548 | 6.8190 | 20000 | 0.2064 | 0.8644 | 0.9267 | 0.9268 | 0.9283 | 0.9250 | 0.9304 | 0.9232 | 0.8708 | 0.8602 | 0.8539 | 0.8726 |
| 0.1537 | 8.1827 | 24000 | 0.1766 | 0.8894 | 0.9406 | 0.9413 | 0.9321 | 0.9447 | 0.9511 | 0.9347 | 0.8805 | 0.8806 | 0.8949 | 0.9016 |
| 0.1259 | 9.5465 | 28000 | 0.1421 | 0.9240 | 0.9605 | 0.9602 | 0.9644 | 0.9561 | 0.9593 | 0.9621 | 0.9334 | 0.9180 | 0.9183 | 0.9265 |
| 0.0919 | 10.9103 | 32000 | 0.1213 | 0.9359 | 0.9673 | 0.9668 | 0.9708 | 0.9672 | 0.9563 | 0.9750 | 0.9298 | 0.9389 | 0.9293 | 0.9456 |
| 0.0416 | 12.2741 | 36000 | 0.0820 | 0.9569 | 0.9782 | 0.9778 | 0.9817 | 0.9709 | 0.9783 | 0.9818 | 0.9569 | 0.9530 | 0.9530 | 0.9649 |
| 0.0618 | 13.6379 | 40000 | 0.0742 | 0.9636 | 0.9815 | 0.9814 | 0.9845 | 0.9793 | 0.9811 | 0.9813 | 0.9600 | 0.9663 | 0.9590 | 0.9689 |
| 0.0553 | 15.0017 | 44000 | 0.0706 | 0.9699 | 0.9848 | 0.9847 | 0.9843 | 0.9836 | 0.9848 | 0.9863 | 0.9619 | 0.9708 | 0.9688 | 0.9781 |
| 0.0451 | 16.3655 | 48000 | 0.0789 | 0.9724 | 0.9863 | 0.9860 | 0.9918 | 0.9816 | 0.9850 | 0.9869 | 0.9671 | 0.9735 | 0.9706 | 0.9786 |
| 0.0123 | 17.7293 | 52000 | 0.0733 | 0.9746 | 0.9874 | 0.9871 | 0.9923 | 0.9834 | 0.9861 | 0.9876 | 0.9706 | 0.9735 | 0.9741 | 0.9803 |
| 0.0255 | 19.0931 | 56000 | 0.0400 | 0.9831 | 0.9916 | 0.9914 | 0.9919 | 0.9872 | 0.9927 | 0.9946 | 0.9829 | 0.9808 | 0.9820 | 0.9869 |
| 0.0124 | 20.4569 | 60000 | 0.0584 | 0.9830 | 0.9915 | 0.9914 | 0.9937 | 0.9947 | 0.9867 | 0.9908 | 0.9810 | 0.9845 | 0.9796 | 0.9870 |
| 20.9459 | 21.8207 | 64000 | 0.0300 | 0.9884 | 0.9942 | 0.9941 | 0.9963 | 0.9921 | 0.9939 | 0.9946 | 0.9914 | 0.9875 | 0.9873 | 0.9874 |
| 0.0036 | 23.1845 | 68000 | 0.0467 | 0.9836 | 0.9918 | 0.9917 | 0.9941 | 0.9850 | 0.9954 | 0.9928 | 0.9893 | 0.9802 | 0.9857 | 0.9789 |
| 0.0311 | 24.5482 | 72000 | 0.0926 | 0.9830 | 0.9918 | 0.9914 | 0.9961 | 0.9907 | 0.9853 | 0.9949 | 0.9839 | 0.9857 | 0.9793 | 0.9832 |
| 0.0564 | 25.9120 | 76000 | 0.0461 | 0.9900 | 0.9950 | 0.9950 | 0.9957 | 0.9937 | 0.9955 | 0.9952 | 0.9925 | 0.9907 | 0.9887 | 0.9883 |
| 0.0064 | 27.2758 | 80000 | 0.0458 | 0.9888 | 0.9945 | 0.9944 | 0.9958 | 0.9961 | 0.9908 | 0.9952 | 0.9869 | 0.9915 | 0.9853 | 0.9916 |
| 0.0023 | 28.6396 | 84000 | 0.0252 | 0.9929 | 0.9964 | 0.9964 | 0.9964 | 0.9977 | 0.9952 | 0.9962 | 0.9940 | 0.9938 | 0.9903 | 0.9936 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
19uez/llama3_2_3B_128_005_5k_GRPO_GGUF | 19uez | 2025-06-23T18:41:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:19uez/llama3_2_3B_128_005_5k_GRPO_full_model",
"base_model:quantized:19uez/llama3_2_3B_128_005_5k_GRPO_full_model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T18:39:42Z | ---
base_model: 19uez/llama3_2_3B_128_005_5k_GRPO_full_model
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 19uez
- **License:** apache-2.0
- **Finetuned from model :** 19uez/llama3_2_3B_128_005_5k_GRPO_full_model
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
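A minimal sketch for running the GGUF locally with llama.cpp; the exact `.gguf` filename inside the repo is an assumption:
```bash
huggingface-cli download 19uez/llama3_2_3B_128_005_5k_GRPO_GGUF --local-dir .
./llama-cli -m llama3_2_3B_128_005_5k_GRPO.gguf -p "Hello" -n 128
```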
|
creaciones-pulso/metastyle_dpo_unsloth-Meta-Llama-3.1-8B-Instruct-bnb-4bit_8_3_0.0001_16_0.05 | creaciones-pulso | 2025-06-23T18:40:21Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T22:04:48Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** creaciones-pulso
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|