modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
IndianSanga/ner-model | IndianSanga | 2025-05-31T13:22:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-31T13:20:21Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: ner-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-model
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
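Pending the missing details below, here is a minimal inference sketch with the 🤗 `pipeline` API (the example sentence is arbitrary, and the label set depends on the undocumented training data):
```python
from transformers import pipeline

# Sketch only: entity labels depend on the (unspecified) fine-tuning dataset.
ner = pipeline(
    "token-classification",
    model="IndianSanga/ner-model",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is headquartered in New York City."))
```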
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
FLOPS-Squared/KeystoneFuse-Baseline-Epoch-5-Instruct-Flax | FLOPS-Squared | 2025-05-31T13:22:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T11:19:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
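Pending the missing details above, a generic sketch based on the repository tags (a Llama-architecture text-generation checkpoint); the prompt and generation settings are arbitrary:
```python
from transformers import pipeline

# Sketch only: the intended prompting format is not documented in this card.
generator = pipeline(
    "text-generation",
    model="FLOPS-Squared/KeystoneFuse-Baseline-Epoch-5-Instruct-Flax",
)
print(generator("Hello, world!", max_new_tokens=50)[0]["generated_text"])
```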
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yukiyounai/Jailbreak-R1 | yukiyounai | 2025-05-31T13:21:23Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"legal",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-22T12:00:39Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- legal
---
|
Whitley7/distilbert-sarcasm-detection | Whitley7 | 2025-05-31T13:20:59Z | 0 | 1 | null | [
"safetensors",
"roberta",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T03:46:17Z | ---
license: apache-2.0
---
|
georgeiac00/dpo_sft_model_32_edge_3_ep | georgeiac00 | 2025-05-31T13:19:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T13:18:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eipyae/eipyae | eipyae | 2025-05-31T13:16:54Z | 0 | 0 | null | [
"av",
"as",
"dataset:disco-eth/EuroSpeech",
"license:bsl-1.0",
"region:us"
] | null | 2025-05-31T13:16:24Z | ---
license: bsl-1.0
datasets:
- disco-eth/EuroSpeech
language:
- av
- as
--- |
Fuscosucof/fusco_alzheimerMRI_model | Fuscosucof | 2025-05-31T13:12:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-30T11:00:31Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fusco_alzheimerMRI_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fusco_alzheimerMRI_model
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0434
- Accuracy: 0.9902
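A minimal inference sketch with the 🤗 `pipeline` API (the image path below is a placeholder, not a file shipped with this repository):
```python
from transformers import pipeline

# Sketch only: class labels come from the (unspecified) fine-tuning dataset.
classifier = pipeline("image-classification", model="Fuscosucof/fusco_alzheimerMRI_model")
print(classifier("mri_slice.png"))  # placeholder path to an MRI slice image
```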
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8393 | 1.0 | 72 | 0.9099 | 0.5703 |
| 0.6828 | 2.0 | 144 | 0.7748 | 0.6914 |
| 0.5156 | 3.0 | 216 | 0.5721 | 0.7637 |
| 0.3141 | 4.0 | 288 | 0.2799 | 0.9082 |
| 0.2024 | 5.0 | 360 | 0.5322 | 0.8066 |
| 0.0934 | 6.0 | 432 | 0.6212 | 0.8555 |
| 0.0307 | 7.0 | 504 | 0.1234 | 0.9551 |
| 0.0383 | 8.0 | 576 | 0.1279 | 0.9570 |
| 0.0195 | 9.0 | 648 | 0.1022 | 0.9746 |
| 0.0102 | 10.0 | 720 | 0.1057 | 0.9629 |
| 0.0081 | 11.0 | 792 | 0.0785 | 0.9766 |
| 0.0009 | 12.0 | 864 | 0.0641 | 0.9805 |
| 0.0023 | 13.0 | 936 | 0.0612 | 0.9824 |
| 0.0089 | 14.0 | 1008 | 0.0571 | 0.9746 |
| 0.0008 | 15.0 | 1080 | 0.0437 | 0.9902 |
| 0.0014 | 16.0 | 1152 | 0.0375 | 0.9883 |
| 0.0002 | 17.0 | 1224 | 0.0485 | 0.9824 |
| 0.0002 | 18.0 | 1296 | 0.0450 | 0.9863 |
| 0.0002 | 19.0 | 1368 | 0.0451 | 0.9863 |
| 0.0002 | 20.0 | 1440 | 0.0434 | 0.9902 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
lacos03/std-1.5-lora-midjourney-1.0 | lacos03 | 2025-05-31T13:11:11Z | 0 | 1 | diffusers | [
"diffusers",
"stable-diffusion",
"lora",
"t2i",
"midjourney",
"style",
"art",
"safetensors",
"text-to-image",
"en",
"dataset:MohamedRashad/midjourney-detailed-prompts",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-30T15:20:04Z | ---
license: creativeml-openrail-m
datasets:
- MohamedRashad/midjourney-detailed-prompts
language:
- en
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
pipeline_tag: text-to-image
library_name: diffusers
tags:
- stable-diffusion
- lora
- t2i
- midjourney
- style
- art
- safetensors
--- |
Mehrm/mehr | Mehrm | 2025-05-31T13:05:29Z | 0 | 1 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-31T13:05:29Z | ---
license: bigscience-openrail-m
---
|
Snarcy/mit-b5_train_002 | Snarcy | 2025-05-31T13:02:45Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T20:27:10Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b5
tags:
- generated_from_trainer
model-index:
- name: mit-b5_train_002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b5_train_002
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0193
- Mean Iou: 0.8074
- Mean Accuracy: 0.9523
- Overall Accuracy: 0.9939
- Per Category Iou: [0.993861306152736, 0.6209314583219739]
- Per Category Accuracy: [0.9948525401842638, 0.9098462857018345]
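A minimal inference sketch with 🤗 Transformers (the input image path is a placeholder; post-processing details are assumptions, not from this card):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "Snarcy/mit-b5_train_002"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sample.png")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)[0]  # per-pixel class indices at reduced resolution
```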
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:|
| 0.006 | 1.3021 | 500 | 0.0210 | 0.7686 | 0.9519 | 0.9917 | [0.9916161135468804, 0.5456785039519645] | [0.9925910583643316, 0.911127036429974] |
| 0.0042 | 2.6042 | 1000 | 0.0207 | 0.7737 | 0.9539 | 0.9920 | [0.991908285445925, 0.5555062317087333] | [0.9928413818014432, 0.9149668675355112] |
| 0.0043 | 3.9062 | 1500 | 0.0203 | 0.7762 | 0.9560 | 0.9921 | [0.9920317877519694, 0.5604237434509759] | [0.9929195911355291, 0.9191044913434324] |
| 0.003 | 5.2083 | 2000 | 0.0187 | 0.7909 | 0.9491 | 0.9931 | [0.9930237925441819, 0.5888398995824345] | [0.9940777053124252, 0.904064749333598] |
| 0.0023 | 6.5104 | 2500 | 0.0204 | 0.7820 | 0.9537 | 0.9925 | [0.9924293920252416, 0.5716014051664092] | [0.9933741427138751, 0.9139500144054193] |
| 0.0029 | 7.8125 | 3000 | 0.0201 | 0.7920 | 0.9500 | 0.9931 | [0.9930661864964873, 0.590834237246924] | [0.9940989986071507, 0.905989507044129] |
| 0.0028 | 9.1146 | 3500 | 0.0298 | 0.7545 | 0.9719 | 0.9903 | [0.9902257223969061, 0.5187995787903269] | [0.9907400148442388, 0.9530528594152126] |
| 0.0024 | 10.4167 | 4000 | 0.0234 | 0.7788 | 0.9533 | 0.9923 | [0.9922408901388992, 0.5654456381813304] | [0.993190431130829, 0.9134972726546403] |
| 0.0023 | 11.7188 | 4500 | 0.0200 | 0.7915 | 0.9584 | 0.9930 | [0.9929147999583595, 0.5901749476024036] | [0.9937597790125627, 0.9230750607085529] |
| 0.0028 | 13.0208 | 5000 | 0.0191 | 0.8033 | 0.9574 | 0.9936 | [0.9935777693357606, 0.612975535809318] | [0.9944543421808892, 0.9202520827331075] |
| 0.0023 | 14.3229 | 5500 | 0.0189 | 0.8042 | 0.9487 | 0.9938 | [0.9937456877375658, 0.614589670664853] | [0.9948176673921475, 0.9024910480608369] |
| 0.0023 | 15.625 | 6000 | 0.0204 | 0.8001 | 0.9564 | 0.9935 | [0.9934206286987506, 0.6067556263565074] | [0.994316752677739, 0.918460484361041] |
| 0.0018 | 16.9271 | 6500 | 0.0182 | 0.8121 | 0.9502 | 0.9942 | [0.9941248990263425, 0.6300048865149038] | [0.9951673327539593, 0.9052147618021543] |
| 0.0014 | 18.2292 | 7000 | 0.0208 | 0.8021 | 0.9602 | 0.9935 | [0.993474511729115, 0.6106912886801219] | [0.9942873973934382, 0.9260384612591063] |
| 0.0027 | 19.5312 | 7500 | 0.0193 | 0.8074 | 0.9523 | 0.9939 | [0.993861306152736, 0.6209314583219739] | [0.9948525401842638, 0.9098462857018345] |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbc7oy0g0c8a85uuzo4fjkmf | BootesVoid | 2025-05-31T12:59:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T12:58:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ava_hanamiya
---
# Cmbbs6Hps09U485Uuwc477Fb4_Cmbc7Oy0G0C8A85Uuzo4Fjkmf
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ava_hanamiya` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ava_hanamiya",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbc7oy0g0c8a85uuzo4fjkmf/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbc7oy0g0c8a85uuzo4fjkmf', weight_name='lora.safetensors')
image = pipeline('ava_hanamiya').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbc7oy0g0c8a85uuzo4fjkmf/discussions) to add images that show off what you’ve made with this LoRA.
|
yazodi/keyword-tfidf | yazodi | 2025-05-31T12:57:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T12:56:09Z | # 🔑 Keyword Extractor (with TF-IDF)
This project extracts the most meaningful keywords from user-provided text using the **TF-IDF algorithm**.
## 📦 Libraries Used
- `scikit-learn`
- `nltk`
- `pandas`
- `streamlit`
## 🚀 How Does the Application Work?
The user enters text → TF-IDF is applied → the 10 highest-scoring words are displayed.
## 🧪 Example Input
```text
Yapay zeka ve doğal dil işleme teknolojileri günümüzde birçok sektörde kullanılmaktadır.
```
## 📌 Output
```text
the 0.44014589420436356
and 0.34233569549228277
python 0.326033995706936
to 0.2934305961362424
of 0.24452549678020197
in 0.22822379699485518
is 0.2119220972095084
data 0.19562039742416157
you 0.1467152980681212
that 0.13041359828277438
```
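A minimal sketch of this flow with scikit-learn's `TfidfVectorizer` (illustrative only: fitting on a single input gives degenerate IDF weights, and the released vectorizer was presumably fit on a larger corpus):
```python
from sklearn.feature_extraction.text import TfidfVectorizer

text = "Yapay zeka ve doğal dil işleme teknolojileri günümüzde birçok sektörde kullanılmaktadır."

# Fit the vectorizer; a real application would fit on a larger corpus so IDF is meaningful.
vectorizer = TfidfVectorizer()
scores = vectorizer.fit_transform([text]).toarray()[0]

# Pair each vocabulary term with its TF-IDF score and print the top 10.
ranked = sorted(zip(vectorizer.get_feature_names_out(), scores), key=lambda p: p[1], reverse=True)
for word, score in ranked[:10]:
    print(word, score)
```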
## 🖥 Launching the Application
The trained TF-IDF vectorizer model can be shared here:
👉 https://huggingface.co/yazodi/keyword-tfidf
## 🪪 License
MIT License
---
|
sizzlebop/Audio-Reasoner-Q8_0-GGUF | sizzlebop | 2025-05-31T12:55:40Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:zhifeixie/Audio-Reasoner",
"base_model:quantized:zhifeixie/Audio-Reasoner",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T12:55:07Z | ---
license: mit
tags:
- llama-cpp
- gguf-my-repo
base_model: zhifeixie/Audio-Reasoner
---
# sizzlebop/Audio-Reasoner-Q8_0-GGUF
This model was converted to GGUF format from [`zhifeixie/Audio-Reasoner`](https://huggingface.co/zhifeixie/Audio-Reasoner) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zhifeixie/Audio-Reasoner) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sizzlebop/Audio-Reasoner-Q8_0-GGUF --hf-file audio-reasoner-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sizzlebop/Audio-Reasoner-Q8_0-GGUF --hf-file audio-reasoner-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sizzlebop/Audio-Reasoner-Q8_0-GGUF --hf-file audio-reasoner-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sizzlebop/Audio-Reasoner-Q8_0-GGUF --hf-file audio-reasoner-q8_0.gguf -c 2048
```
|
igorcouto/sofya-telephony-pt-500h-2 | igorcouto | 2025-05-31T12:51:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-31T12:50:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
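Pending the missing details above, a generic sketch based on the repository tags (a Whisper-style speech-recognition checkpoint); the audio file name is a placeholder:
```python
from transformers import pipeline

# Sketch only: language and sampling-rate expectations are not documented in this card.
asr = pipeline("automatic-speech-recognition", model="igorcouto/sofya-telephony-pt-500h-2")
result = asr("call_recording.wav")  # placeholder telephony audio file
print(result["text"])
```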
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOTORONG/Qwen3_Lora_250531 | TOTORONG | 2025-05-31T12:49:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-32B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T12:48:32Z | ---
base_model: unsloth/Qwen3-32B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KrystalLi/Qwen2-0.5B-GRPO-test | KrystalLi | 2025-05-31T12:39:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:08:10Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KrystalLi/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
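A minimal sketch of how such a run can be set up with TRL's `GRPOTrainer` (the reward function and column mapping below are illustrative assumptions, not the authors' training script):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; mapping it from the dataset's
# "problem" field is an assumption for this sketch.
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

# Toy reward for illustration only; the actual reward functions are not documented here.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(str(c))) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```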
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LinaSad/mcqa_all_merged_on_letter_444 | LinaSad | 2025-05-31T12:35:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T12:34:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tokyotech-llm/Llama-3.3-Swallow-70B-v0.4 | tokyotech-llm | 2025-05-31T12:29:53Z | 474 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ja",
"arxiv:2404.17733",
"arxiv:2505.02881",
"arxiv:2407.21783",
"license:llama3.3",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-17T11:42:28Z | ---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license:
- llama3.3
- gemma
model_type: llama
---
# Llama 3.3 Swallow - Built with Llama
Llama 3.3 Swallow is a large language model (70B) that was built by continual pre-training on the [Meta Llama 3.3](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) model.
Llama 3.3 Swallow enhances the Japanese language capabilities of the original Llama 3.3 while retaining its English language capabilities.
For continual pre-training, we used approximately 315 billion tokens sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content (see the Training Datasets section of the base model).
The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on the synthetic data specially built for Japanese.
See the Swallow Model Index section to find other model variants.
# Release History
- **March 10, 2025**: Released [Llama-3.3-Swallow-70B-Instruct-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4) and [Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4).
- **December 30, 2024**: Released [Llama-3.1-Swallow-70B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3).
- **December 23, 2024**: Released [Llama-3.1-Swallow-8B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3).
- **November 11, 2024**: Released [Llama-3.1-Swallow-8B-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) and [Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2).
- **October 08, 2024**: Released [Llama-3.1-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1), [Llama-3.1-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1), [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1), and [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1).
## Swallow Model Index
|Model|Llama-3.1-Swallow v0.1|Llama-3.1-Swallow-Instruct v0.1|Llama-3.1-Swallow v0.2|Llama-3.1-Swallow-Instruct v0.2|Llama-3.1-Swallow-Instruct v0.3|Llama-3.3-Swallow v0.4|Llama-3.3-Swallow-Instruct v0.4|
|---|---|---|---|---|---|---|---|
|8B| [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3) | | |
|70B| [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1) | | | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4) |

The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/index.en.html) provides large language models developed by the Swallow team.
## Model Details
* **Model type**: Please refer to [Llama 3.1 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to [Llama 3.1 blog](https://ai.meta.com/blog/meta-llama-3-1) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
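A minimal loading sketch with 🤗 Transformers (a sketch under assumed defaults, not an official example; the prompt is arbitrary):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tokyotech-llm/Llama-3.3-Swallow-70B-v0.4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Arbitrary Japanese prompt for illustration.
inputs = tokenizer("日本の首都は", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```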
## Model Performance
### Japanese tasks
|Model|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
| Qwen2-72B | 0.960 | 0.620 | 0.561 | 0.926 | 0.238 | 0.768 | 0.275 | 0.241 | 0.782 | 0.561 | 0.593 |
| Qwen2.5-72B | **0.972** | 0.611 | 0.619 | **0.930** | 0.279 | **0.828** | 0.287 | 0.252 | **0.804** | **0.648** | 0.623 |
| Sarashina2-70B | 0.929 | **0.717** | 0.668 | 0.929 | 0.190 | 0.488 | 0.313 | 0.243 | 0.592 | 0.235 | 0.530 |
| Llama 3 70B | 0.946 | 0.606 | 0.589 | 0.922 | 0.228 | 0.664 | 0.286 | 0.252 | 0.705 | 0.491 | 0.569 |
| Llama 3.1 70B | 0.946 | 0.616 | 0.603 | 0.925 | 0.228 | 0.672 | 0.287 | 0.257 | 0.669 | 0.462 | 0.566 |
| Llama 3 Youko 70B | 0.946 | 0.602 | 0.610 | 0.923 | 0.242 | 0.684 | 0.292 | 0.250 | 0.704 | 0.463 | 0.571 |
| Llama 3 Swallow 70B | 0.968 | 0.675 | 0.684 | 0.923 | 0.239 | 0.708 | 0.307 | 0.255 | 0.706 | 0.477 | 0.594 |
| Llama 3.1 Swallow 70B | 0.955 | 0.645 | 0.678 | 0.923 | 0.272 | 0.684 | 0.320 | 0.259 | 0.709 | 0.487 | 0.593 |
| **Llama 3.3 Swallow 70B v0.4** | 0.967 | 0.671 | **0.732** | 0.924 | **0.283** | 0.776 | **0.327** | **0.260** | 0.742 | 0.604 | **0.629** |
### English tasks
|Model|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|MATH|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|4-shot|3-shot|0-shot| |
| |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|CoT EM Acc|pass@1| |
| Qwen2-72B | 0.418 | 0.790 | 0.677 | 0.673 | 0.915 | 0.842 | **0.893** | 0.560 | 0.643 | 0.608 | 0.702 |
| Qwen2.5-72B | 0.416 | 0.760 | 0.685 | **0.693** | 0.901 | **0.861** | 0.870 | **0.626** | 0.727 | 0.554 | 0.709 |
| Sarashina2-70B | 0.388 | 0.537 | 0.628 | 0.675 | 0.917 | 0.630 | 0.011 | 0.206 | 0.639 | 0.281 | 0.491 |
| Llama 3 70B | 0.440 | 0.826 | **0.690** | 0.618 | 0.920 | 0.787 | 0.801 | 0.446 | **0.829** | 0.527 | 0.689 |
| Llama 3.1 70B | **0.450** | **0.829** | **0.690** | 0.605 | 0.920 | 0.786 | 0.798 | 0.434 | 0.655 | 0.546 | 0.671 |
| Llama 3 Youko 70B | 0.436 | **0.829** | **0.690** | 0.610 | 0.922 | 0.785 | 0.797 | 0.408 | 0.826 | 0.412 | 0.671 |
| Llama 3 Swallow 70B | 0.430 | 0.823 | 0.682 | 0.628 | 0.923 | 0.774 | 0.817 | 0.414 | 0.734 | 0.499 | 0.672 |
| Llama 3.1 Swallow 70B v0.1 | 0.428 | 0.826 | **0.690** | 0.612 | **0.927** | 0.772 | 0.809 | 0.380 | 0.806 | 0.540 | 0.679 |
| **Llama 3.3 Swallow 70B v0.4** | 0.424 | 0.817 | 0.683 | 0.641 | 0.920 | 0.802 | 0.863 | 0.496 | 0.754 | **0.709** | **0.711** |
## Evaluation Benchmarks
The evaluation script can be found at [swallow-llm/swallow-evaluation](https://github.com/swallow-llm/swallow-evaluation), tagged as `v202411`.
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.3.0), JP Language Model Evaluation Harness (commit #9b42d41), and Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [関根, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [尹ら, 2024])
- Code generation (JHumanEval [佐藤ら, 2024])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.4.2) and Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Mathematical reasoning (MATH [Hendrycks et al., 2022][Lightman et al., 2024])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [Dclm-baseline-1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0)
- [English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [FineMath-4+](https://huggingface.co/datasets/HuggingFaceTB/finemath)
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Laboro ParaCorpus](https://github.com/laboroai/Laboro-ParaCorpus)
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733) (filtered using [Swallow Education Classifier(Wiki-based)](https://huggingface.co/tokyotech-llm/edu-classifier))
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733) (filtered using [Swallow Education Classifier](https://huggingface.co/tokyotech-llm/edu-classifier))
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733) (synthetic QA-format)
- Swallow Code Version 0.3 (filtering from [The Stack v2 train smol ids](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids) and then refactoring with Llama-3.3-70B-Instruct)
### Swallow Corpus Version 2
We built the Swallow Corpus by extracting high-quality Japanese texts from Common Crawl. In Version 2, we expanded the scope of the Common Crawl collection and modified the pipeline sequence to enable more flexible quality filtering.
For Llama 3.1 Swallow v0.2, we further refined our quality filtering and data sampling strategies, resulting in an even higher-quality selection of Japanese texts for pre-training.
For Llama 3.3 Swallow 70B v0.4, we generated synthetic QA-format text by using Gemma 2 27B IT to paraphrase educational web documents from our corpus.
Further details of the methodology and analysis will be provided in a forthcoming paper.
### Swallow Code Version 0.3
We built Swallow Code Version 0.3 by filtering The Stack v2 train smol ids and then refactoring the surviving code with Llama-3.3-70B-Instruct.
In the filtering step, we removed code texts that had syntax errors or that pylint scored below seven. We have already released the filtered version as [Swallow Code Version 0.1](https://huggingface.co/datasets/tokyotech-llm/swallow-code-v0.1).
In the refactoring step, we prompted Llama-3.3-70B-Instruct to follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html) and coding best practices.
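A sketch of how such a filter might look (implementation details assumed, not the authors' actual pipeline):
```python
import ast
import subprocess
import tempfile

def keep_source(source: str) -> bool:
    """Keep a Python file only if it parses and pylint rates it >= 7/10 (assumed rule)."""
    try:
        ast.parse(source)  # reject files with syntax errors
    except SyntaxError:
        return False
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    out = subprocess.run(["pylint", path], capture_output=True, text=True).stdout
    # pylint reports e.g. "Your code has been rated at 8.50/10"
    for line in out.splitlines():
        if "rated at" in line:
            return float(line.split("rated at ")[1].split("/")[0]) >= 7.0
    return False
```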
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 3.3 under a generous open license.
We would like to thank Amazon Web Services (AWS) for providing access to SageMaker HyperPod, which enabled the training of the Llama 3.3 Swallow project.
We received various supports including:
+ AIST project: "Research and Development of Foundation Models for Generative AI in the Physical Domain"
+ NEDO project: "Development of Artificial Intelligence Application Technology to Support Judgment in Design Risk Assessment Work Based on the Perspective of Skilled Persons" (JPNP18002) of "Development of Integration Technology as the Core of Next Generation Artificial Intelligence and Robotics"
+ MEXT project: "Formation of R&D center to ensure transparency and reliability of generative AI models"
+ AIST program: [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html)
## License
[META LLAMA 3.3 COMMUNITY LICENSE](https://www.llama.com/llama3_3/license/) and [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
## Authors
Here are the team members:
- From [Institute of Science Tokyo Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
- [Koki Maeda](https://sites.google.com/view/silviase)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://sites.google.com/view/masanariohi)
- [Hinari Shimada](https://hinarishimada.github.io/portfolio)
- [Taihei Shiotani](https://github.com/inatoihs)
- [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Institute of Science Tokyo YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
- [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
- [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
- [Yukito Tajima](https://www.linkedin.com/in/yukito-tajima-51bbb2299)
- [Masaki Kawamura](https://x.com/Masakichi333210)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
- [Hiroya Takamura](https://sites.google.com/view/hjtakamura)
## How to cite
If you find our work helpful, please feel free to cite these papers.
```
@inproceedings{Fujii:COLM2024,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation:
Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki
Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae
Mizuki and Rio Yokota and Naoaki Okazaki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
title={Building a Large Japanese Web Corpus for Large Language Models},
author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki
Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay
Loem and Rio Yokota and Sakae Mizuki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@misc{fujii2025rewritingpretrainingdataboosts,
title={Rewriting Pre-Training Data Boosts LLM Performance in Math and Code},
author={Kazuki Fujii and Yukito Tajima and Sakae Mizuki and Hinari Shimada and Taihei Shiotani and Koshiro Saito and Masanari Ohi and Masaki Kawamura and Taishi Nakamura and Takumi Okamoto and Shigeki Ishida and Kakeru Hattori and Youmi Ma and Hiroya Takamura and Rio Yokota and Naoaki Okazaki},
year={2025},
eprint={2505.02881},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.02881},
}
```
### References
```tex
@misc{dubey2024llama3herdmodels,
title={The Llama 3 Herd of Models},
author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
year={2024},
eprint={2407.21783},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.21783},
}
``` |
VanishedBrB/codegemma-7b | VanishedBrB | 2025-05-31T12:26:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/codegemma-7b",
"base_model:finetune:unsloth/codegemma-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T12:26:14Z | ---
base_model: unsloth/codegemma-7b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VanishedBrB
- **License:** apache-2.0
- **Finetuned from model :** unsloth/codegemma-7b
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SVECTOR-CORPORATION/FAL-1.5 | SVECTOR-CORPORATION | 2025-05-31T12:19:15Z | 0 | 1 | null | [
"safetensors",
"fal_v1_5",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T10:27:04Z | ---
license: apache-2.0
---
|
Trevin007/insurance-estimator | Trevin007 | 2025-05-31T12:11:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T12:06:22Z | ---
license: apache-2.0
---
|
sizzlebop/Viper-Coder-v1.4-Q8_0-GGUF | sizzlebop | 2025-05-31T12:07:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"coder",
"trl",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:prithivMLmods/Viper-Coder-v1.4",
"base_model:quantized:prithivMLmods/Viper-Coder-v1.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T12:06:39Z | ---
license: apache-2.0
language:
- en
- zh
base_model: prithivMLmods/Viper-Coder-v1.4
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- coder
- trl
- llama-cpp
- gguf-my-repo
---
# sizzlebop/Viper-Coder-v1.4-Q8_0-GGUF
This model was converted to GGUF format from [`prithivMLmods/Viper-Coder-v1.4`](https://huggingface.co/prithivMLmods/Viper-Coder-v1.4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Viper-Coder-v1.4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sizzlebop/Viper-Coder-v1.4-Q8_0-GGUF --hf-file viper-coder-v1.4-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sizzlebop/Viper-Coder-v1.4-Q8_0-GGUF --hf-file viper-coder-v1.4-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sizzlebop/Viper-Coder-v1.4-Q8_0-GGUF --hf-file viper-coder-v1.4-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sizzlebop/Viper-Coder-v1.4-Q8_0-GGUF --hf-file viper-coder-v1.4-q8_0.gguf -c 2048
```
|
VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL-F16-GGUF | VanishedBrB | 2025-05-31T12:03:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL",
"base_model:quantized:VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T12:03:28Z | ---
base_model: VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL-F16-GGUF
This LoRA adapter was converted to GGUF format from [`VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL`](https://huggingface.co/VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/VanishedBrB/qwen2.5-coder-7b-bnb-4bit-velocity-SQL) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora qwen2.5-coder-7b-bnb-4bit-velocity-SQL-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora qwen2.5-coder-7b-bnb-4bit-velocity-SQL-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
TanAlexanderlz/RALL_RGBCROP_Aug16F-WD01 | TanAlexanderlz | 2025-05-31T12:03:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-31T09:26:03Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-WD01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-WD01
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7867
- Accuracy: 0.8414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.4949 | 0.0835 | 289 | 0.5840 | 0.6973 |
| 0.2616 | 1.0835 | 578 | 0.5304 | 0.7955 |
| 0.078 | 2.0835 | 867 | 0.7102 | 0.7975 |
| 0.0089 | 3.0835 | 1156 | 0.9156 | 0.7935 |
| 0.0012 | 4.0835 | 1445 | 1.0430 | 0.7914 |
| 0.0005 | 5.0835 | 1734 | 1.0521 | 0.8037 |
| 0.0006 | 6.0835 | 2023 | 1.0802 | 0.8119 |
| 0.0002 | 7.0835 | 2312 | 1.1696 | 0.8078 |
| 0.0003 | 8.0835 | 2601 | 1.1930 | 0.8139 |
| 0.0002 | 9.0835 | 2890 | 1.2366 | 0.8078 |
| 0.0002 | 10.0835 | 3179 | 1.2292 | 0.8098 |
| 0.0002 | 11.0817 | 3462 | 1.2295 | 0.8098 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
yazidsupriadi/bot-detector-lstm | yazidsupriadi | 2025-05-31T11:58:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T04:13:00Z | # 🧠 Bot Detector LSTM
A bot-account detection model based on text and numeric features, using an LSTM.
---
## 📈 Training History
| Epoch | Loss | Accuracy | Precision | Recall | F1-Score |
|:-----:|:-----:|:--------:|:---------:|:------:|:--------:|
| 1 | 0.3796 | 0.8113 | 0.8136 | 0.8111 | 0.8108 |
| 2 | 0.3687 | 0.7997 | 0.7997 | 0.7998 | 0.7997 |
| 3 | 0.3574 | 0.8053 | 0.8109 | 0.8050 | 0.8043 |
| 4 | 0.3458 | 0.8375 | 0.8406 | 0.8373 | 0.8371 |
| 5 | 0.3562 | 0.7618 | 0.8391 | 0.7608 | 0.7469 |
| 6 | 0.3403 | 0.7650 | 0.8385 | 0.7641 | 0.7511 |
| 7 | 0.3323 | 0.8645 | 0.8646 | 0.8645 | 0.8645 |
| 8 | 0.3236 | 0.8475 | 0.8480 | 0.8474 | 0.8474 |
| 9 | 0.3206 | 0.8575 | 0.8594 | 0.8574 | 0.8573 |
| 10 | 0.3153 | 0.8508 | 0.8508 | 0.8508 | 0.8507 |
---
## 📊 Confusion Matrix

---
## 📦 Files
- `model.pth`
- `vocab.pkl`
- `scaler.pkl`
- `label_encoder.pkl`
- `history.json`
- `confusion_matrix.png`
---
## 🚀 How to Load the Model
```python
import torch
import pickle

from model import BotDetector

# Load the model
model = BotDetector(
    vocab_size=VOCAB_SIZE,  # replace with your vocabulary size
    embed_dim=100,
    hidden_dim=128,
    num_numeric=4,          # number of numeric features
    output_dim=2            # number of classes
)
model.load_state_dict(torch.load("model.pth"))
model.eval()

# Load the scaler, vocab, and label encoder
with open("scaler.pkl", "rb") as f:
    scaler = pickle.load(f)
with open("vocab.pkl", "rb") as f:
    vocab = pickle.load(f)
with open("label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)
```
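A minimal inference sketch, under the assumption that the model's forward pass takes a padded token-id tensor plus a scaled numeric-feature tensor; the `encode` helper and the example feature layout below are hypothetical, not part of the released files:
```python
import numpy as np

def encode(text, vocab, max_len=50):
    # Hypothetical tokenizer: map whitespace tokens to ids, pad/truncate to max_len
    ids = [vocab.get(tok, 0) for tok in text.lower().split()][:max_len]
    ids = ids + [0] * (max_len - len(ids))
    return torch.tensor([ids])

text = "follow me for free giveaways!!!"
numeric = np.array([[120, 3500, 0.8, 12]])  # e.g. followers, tweet count, ratio, account age

features = torch.tensor(scaler.transform(numeric), dtype=torch.float32)
with torch.no_grad():
    logits = model(encode(text, vocab), features)  # assumed forward(text_ids, numeric) signature
pred = label_encoder.inverse_transform([logits.argmax(dim=1).item()])[0]
print(pred)
```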
---
### ⚠️ Make sure you log the metrics to `history` during training:
Add this to the per-epoch evaluation step:
```python
from sklearn.metrics import precision_score, recall_score, f1_score

# After computing all_preds and all_labels
prec = precision_score(all_labels, all_preds, average="macro", zero_division=0)
rec = recall_score(all_labels, all_preds, average="macro", zero_division=0)
f1 = f1_score(all_labels, all_preds, average="macro", zero_division=0)

history["precision"].append(prec)
history["recall"].append(rec)
history["f1"].append(f1)
```
|
mario81464/qwen-3B_instruct_base_sft_FEVER_4167 | mario81464 | 2025-05-31T11:54:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T11:53:51Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yazodi/youtube-video-recommender | yazodi | 2025-05-31T11:52:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T11:50:54Z | # 🎬 YouTube Video Recommendation System (Content-Based Filtering)
This project is a **content-based filtering** system that recommends videos similar to a given YouTube video.
The user enters a video title, and the system recommends the 5 most similar videos based on the **title**, **description**, and **tags**.
---
## 📦 Dataset
- Source: [YouTube Trending Videos - Kaggle](https://www.kaggle.com/datasets/datasnaek/youtube-new)
- File used: `USvideos.csv` (downsampled copy: `USvideos_sample.csv`)
---
## 🧠 Technologies Used
- Python, pandas, numpy
- scikit-learn → **TF-IDF Vectorizer**, **cosine similarity**
- joblib (model persistence)
- matplotlib, wordcloud (visualization)
- Streamlit (web app)
---
## 🔍 Project Steps
1. The `title`, `description`, and `tags` columns were selected and cleaned.
2. These columns were merged into a single `text` column.
3. The text was vectorized with **TF-IDF**.
4. Video-to-video similarity was computed with **cosine similarity**.
5. The 5 videos most similar to the user's input title are recommended (a minimal sketch of this pipeline follows below).
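A minimal sketch of this pipeline, assuming the dataset's standard `title`/`description`/`tags` columns; parameter choices such as `max_features` are illustrative, not taken from the repository:
```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Build the combined text column from title, description, and tags
df = pd.read_csv("USvideos_sample.csv")
df["text"] = (
    df["title"].fillna("") + " "
    + df["description"].fillna("") + " "
    + df["tags"].fillna("")
)

# Vectorize with TF-IDF and compute the video-to-video similarity matrix
vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
tfidf = vectorizer.fit_transform(df["text"])
sim = cosine_similarity(tfidf)

def recommend(title, top_n=5):
    idx = df.index[df["title"] == title][0]         # assumes the title exists in the dataset
    best = sim[idx].argsort()[::-1][1 : top_n + 1]  # skip the video itself
    return df["title"].iloc[best].tolist()
```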
---
## 💻 Web App (Streamlit)
A Streamlit interface was integrated into the project.
### 🚀 Running the App:
```bash
pip install -r requirements.txt
streamlit run app.py
```
You can get recommendations through the UI by entering a video title (if it exists in the dataset).
## 📊 Visualization
- The 15 most frequently used YouTube tags
- The 15 most frequent words in titles
- A word cloud view

## 📁 File Structure
```
youtube-recommendation/
├── USvideos_sample.csv
├── app.py
├── youtube_recommender.ipynb
├── tfidf_vectorizer.pkl
├── cosine_similarity.pkl
├── youtube_df.pkl
├── title_indices.pkl
├── requirements.txt
└── README.md
```
## 🌐 Model Sharing (Optional)
The trained models can be uploaded to Hugging Face:
https://huggingface.co/yazodi/youtube-video-recommender

## ✍️ Author
Hande Çarkcı
📫 GitHub | 💡 Data Science & AI student

## 📦 Requirements
```
streamlit
pandas
numpy
scikit-learn
joblib
matplotlib
wordcloud
```
--- |
hmueller25/my_awesome_billsum_model | hmueller25 | 2025-05-31T11:52:50Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-14T14:03:56Z | ---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9010
- Rouge1: 0.1153
- Rouge2: 0.0398
- Rougel: 0.1012
- Rougelsum: 0.1015
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.0675 | 0.1107 | 0.0296 | 0.0941 | 0.0949 | 20.0 |
| No log | 2.0 | 50 | 2.9651 | 0.1084 | 0.0347 | 0.0931 | 0.0932 | 20.0 |
| No log | 3.0 | 75 | 2.9167 | 0.117 | 0.0377 | 0.101 | 0.1011 | 20.0 |
| No log | 4.0 | 100 | 2.9010 | 0.1153 | 0.0398 | 0.1012 | 0.1015 | 20.0 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
hugosisal/camembert_foot_0_1_classifier | hugosisal | 2025-05-31T11:48:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-31T11:48:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Snarcy/mit-b3_train_006 | Snarcy | 2025-05-31T11:45:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b3",
"base_model:finetune:nvidia/mit-b3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T17:43:54Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b3
tags:
- generated_from_trainer
model-index:
- name: mit-b3_train_006
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b3_train_006
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0081
- Mean Iou: 0.8027
- Mean Accuracy: 0.8983
- Overall Accuracy: 0.9968
- Per Category Iou: [0.9968164525692883, 0.608572691117985]
- Per Category Accuracy: [0.9980630611071355, 0.7985002512465695]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------:|:----------------------------------------:|
| 0.007 | 1.9608 | 400 | 0.0136 | 0.7471 | 0.8996 | 0.9950 | [0.9950065436234374, 0.49921904410999063] | [0.9962228811962033, 0.8030355739373558] |
| 0.0045 | 3.9216 | 800 | 0.0110 | 0.7663 | 0.9213 | 0.9955 | [0.9954771945967922, 0.537074590740347] | [0.9964270346514705, 0.8462628683339132] |
| 0.0041 | 5.8824 | 1200 | 0.0084 | 0.7945 | 0.8601 | 0.9969 | [0.9969192716989699, 0.5920334897881517] | [0.9986419357106135, 0.7215801476556762] |
| 0.0043 | 7.8431 | 1600 | 0.0081 | 0.7998 | 0.8838 | 0.9969 | [0.996857754270491, 0.6027318432370222] | [0.9982854468333784, 0.7692396892273201] |
| 0.0029 | 9.8039 | 2000 | 0.0079 | 0.7996 | 0.8766 | 0.9969 | [0.996911452537117, 0.6022825416409624] | [0.9984289060181608, 0.7547446948320513] |
| 0.0031 | 11.7647 | 2400 | 0.0077 | 0.8029 | 0.8846 | 0.9969 | [0.9969304678297617, 0.6088367880656552] | [0.9983480602902818, 0.7708888974785152] |
| 0.0033 | 13.7255 | 2800 | 0.0088 | 0.7913 | 0.8917 | 0.9966 | [0.9965605987631156, 0.5860159757000183] | [0.9978872956176418, 0.7854998518289462] |
| 0.0033 | 15.6863 | 3200 | 0.0082 | 0.8011 | 0.8954 | 0.9968 | [0.99679644039349, 0.6053462677462835] | [0.9980785745306849, 0.7927537912463118] |
| 0.0042 | 17.6471 | 3600 | 0.0081 | 0.8014 | 0.9014 | 0.9968 | [0.9967558676600388, 0.605952715878113] | [0.9979635832777769, 0.8047749732647881] |
| 0.0024 | 19.6078 | 4000 | 0.0081 | 0.8027 | 0.8983 | 0.9968 | [0.9968164525692883, 0.608572691117985] | [0.9980630611071355, 0.7985002512465695] |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
TanAlexanderlz/RALL_RGBCROP_Aug16F-WD005 | TanAlexanderlz | 2025-05-31T11:43:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-31T09:24:01Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-WD005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-WD005
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8464
- Accuracy: 0.8253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.4659 | 0.0835 | 289 | 0.5385 | 0.7280 |
| 0.1993 | 1.0835 | 578 | 0.4735 | 0.8078 |
| 0.032 | 2.0835 | 867 | 0.6537 | 0.8098 |
| 0.0061 | 3.0835 | 1156 | 0.7957 | 0.8037 |
| 0.0032 | 4.0835 | 1445 | 0.9085 | 0.8119 |
| 0.0004 | 5.0835 | 1734 | 0.9475 | 0.8098 |
| 0.0007 | 6.0835 | 2023 | 0.9976 | 0.8139 |
| 0.0002 | 7.0835 | 2312 | 1.0835 | 0.8139 |
| 0.0002 | 8.0835 | 2601 | 1.1102 | 0.8119 |
| 0.0002 | 9.0835 | 2890 | 1.1465 | 0.8078 |
| 0.0001 | 10.0835 | 3179 | 1.1561 | 0.8078 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
luckysantoso/adapter_lora_lawbot_deepseek | luckysantoso | 2025-05-31T11:32:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T11:31:52Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheWeeeed/chinese-paragraph-selector | TheWeeeed | 2025-05-31T11:29:43Z | 0 | 0 | null | [
"safetensors",
"bert",
"extractive-qa",
"chinese",
"two-stage-qa",
"question-answering",
"zh",
"license:apache-2.0",
"region:us"
] | question-answering | 2025-05-31T11:08:42Z | ---
license: apache-2.0
language:
- zh
tags:
- extractive-qa
- bert
- chinese
- two-stage-qa
pipeline_tag: question-answering
---
## Model Description
* **Model type**: bert-base-chinese
* **Language**: Chinese
* **Training data**: https://github.com/YuTsyh/Chinese-Extractive-Question-Answering-QA-/tree/main/data
* **Related project/GitHub**: https://github.com/YuTsyh/Chinese-Extractive-Question-Answering-QA-.git
* **Related models**:
  * TheWeeeed/chinese-paragraph-selector
  * TheWeeeed/chinese-extractive-qa
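A minimal usage sketch, assuming this selector is a BERT sequence classifier that scores (question, paragraph) pairs so the top-scoring paragraph can be handed to the extractive QA model; the scoring convention below is an assumption, not documented behavior:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "TheWeeeed/chinese-paragraph-selector"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

question = "台北101位於哪個城市?"
paragraphs = [
    "台北101是位於台灣台北市信義區的摩天大樓。",
    "高雄是台灣南部重要的港口城市。",
]

# Score each (question, paragraph) pair and keep the highest-scoring paragraph
enc = tokenizer([question] * len(paragraphs), paragraphs,
                truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**enc).logits[:, -1]  # assumption: last logit acts as a relevance score
best = paragraphs[scores.argmax().item()]
print(best)  # pass `best` to TheWeeeed/chinese-extractive-qa for span extraction
```
|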
Miamoto/Phi-4-multimodal-instruct-425h-v2 | Miamoto | 2025-05-31T11:29:43Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"phi4mm",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-4-multimodal-instruct",
"base_model:finetune:microsoft/Phi-4-multimodal-instruct",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-28T10:48:23Z | ---
library_name: transformers
license: mit
base_model: microsoft/Phi-4-multimodal-instruct
tags:
- generated_from_trainer
model-index:
- name: Phi-4-multimodal-instruct-425h-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi-4-multimodal-instruct-425h-v2
This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.99) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 2.18.0
- Tokenizers 0.21.0
|
TanAlexanderlz/RALL_RGBCROP_Aug16F-WD001 | TanAlexanderlz | 2025-05-31T11:25:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-31T09:29:50Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-WD001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-WD001
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6801
- Accuracy: 0.8394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4792 | 0.0835 | 289 | 0.5590 | 0.7157 |
| 0.2392 | 1.0835 | 578 | 0.5009 | 0.7955 |
| 0.0414 | 2.0835 | 867 | 0.6815 | 0.8016 |
| 0.0484 | 3.0835 | 1156 | 0.8761 | 0.8016 |
| 0.0032 | 4.0835 | 1445 | 0.9753 | 0.8139 |
| 0.0004 | 5.0835 | 1734 | 1.0459 | 0.8057 |
| 0.002 | 6.0835 | 2023 | 1.1537 | 0.7914 |
| 0.0002 | 7.0835 | 2312 | 1.1430 | 0.8016 |
| 0.0002 | 8.0835 | 2601 | 1.1876 | 0.7996 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
QuanHoangNgoc/wav2vec2-base-960h_310145 | QuanHoangNgoc | 2025-05-31T11:18:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"speech-to-text",
"vietnamese",
"uit-vimd",
"generated_from_trainer",
"vi",
"dataset:uit-vimd",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-31T01:45:43Z | ---
library_name: transformers
language:
- vi
base_model: wav2vec2-base-960h
tags:
- speech-to-text
- vietnamese
- uit-vimd
- generated_from_trainer
datasets:
- uit-vimd
metrics:
- wer
model-index:
- name: wav2vec2-base-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: UIT-ViMD
type: uit-vimd
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h
This model is a fine-tuned version of [wav2vec2-base-960h](https://huggingface.co/wav2vec2-base-960h) on the UIT-ViMD dataset.
It achieves the following results on the evaluation set:
- Loss: 4450.7129
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1877
- training_steps: 45048
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 20957.0712 | 0.0479 | 90 | 19798.2793 | 1.1701 |
| 19576.6615 | 0.0958 | 180 | 17914.9492 | 0.9992 |
| 13245.4418 | 0.1438 | 270 | 10092.9766 | 1.0 |
| 8553.1667 | 0.1917 | 360 | 6336.5059 | 1.0 |
| 6115.658 | 0.2396 | 450 | 5300.0620 | 1.0 |
| 7133.3411 | 0.2875 | 540 | 4846.8809 | 1.0 |
| 5470.592 | 0.3355 | 630 | 4605.5454 | 1.0 |
| 5727.5495 | 0.3834 | 720 | 4530.3584 | 1.0 |
| 5829.9049 | 0.4313 | 810 | 4497.2104 | 1.0 |
| 6056.6081 | 0.4792 | 900 | 4464.2661 | 1.0 |
| 4764.8876 | 0.5272 | 990 | 4477.3540 | 1.0 |
| 5050.1385 | 0.5751 | 1080 | 4459.1509 | 1.0 |
| 4650.4462 | 0.6230 | 1170 | 4470.8354 | 1.0 |
| 4949.9332 | 0.6709 | 1260 | 4456.6084 | 1.0 |
| 5222.6454 | 0.7188 | 1350 | 4466.2896 | 1.0 |
| 4829.4878 | 0.7668 | 1440 | 4469.7002 | 1.0 |
| 5221.8602 | 0.8147 | 1530 | 4491.8965 | 1.0 |
| 5194.6597 | 0.8626 | 1620 | 4446.9839 | 1.0 |
| 4551.5356 | 0.9105 | 1710 | 4471.9189 | 1.0 |
| 4398.6497 | 0.9585 | 1800 | 4466.7090 | 1.0 |
| 4808.224 | 1.0064 | 1890 | 4453.1763 | 1.0 |
| 4893.1133 | 1.0543 | 1980 | 4475.0298 | 1.0 |
| 4624.7088 | 1.1022 | 2070 | 4456.4785 | 1.0 |
| 4865.1897 | 1.1502 | 2160 | 4494.1143 | 1.0 |
| 4730.089 | 1.1981 | 2250 | 4465.1245 | 1.0 |
| 4604.9293 | 1.2460 | 2340 | 4446.9834 | 1.0 |
| 4758.8199 | 1.2939 | 2430 | 4445.7725 | 1.0 |
| 4388.5534 | 1.3419 | 2520 | 4471.3955 | 1.0 |
| 4762.9514 | 1.3898 | 2610 | 4465.6992 | 1.0 |
| 4692.0712 | 1.4377 | 2700 | 4457.9404 | 1.0 |
| 4745.1354 | 1.4856 | 2790 | 4469.7354 | 1.0 |
| 4821.5339 | 1.5335 | 2880 | 4453.1030 | 1.0 |
| 4522.003 | 1.5815 | 2970 | 4477.9453 | 1.0 |
| 4558.7674 | 1.6294 | 3060 | 4471.9019 | 1.0 |
| 4595.7326 | 1.6773 | 3150 | 4472.9482 | 1.0 |
| 4480.819 | 1.7252 | 3240 | 4452.7534 | 1.0 |
| 4677.2083 | 1.7732 | 3330 | 4464.3735 | 1.0 |
| 4748.888 | 1.8211 | 3420 | 4462.0527 | 1.0 |
| 4376.7969 | 1.8690 | 3510 | 4478.0942 | 1.0 |
| 4465.0486 | 1.9169 | 3600 | 4458.4336 | 1.0 |
| 4582.4323 | 1.9649 | 3690 | 4465.0527 | 1.0 |
| 4338.3216 | 2.0128 | 3780 | 4470.2817 | 1.0 |
| 4741.3194 | 2.0607 | 3870 | 4473.3320 | 1.0 |
| 4606.2461 | 2.1086 | 3960 | 4473.0693 | 1.0 |
| 4558.2244 | 2.1565 | 4050 | 4445.2275 | 1.0 |
| 4550.3937 | 2.2045 | 4140 | 4456.3667 | 1.0 |
| 4772.4045 | 2.2524 | 4230 | 4460.6973 | 1.0 |
| 4740.5282 | 2.3003 | 4320 | 4429.3594 | 1.0 |
| 4484.9518 | 2.3482 | 4410 | 4469.2808 | 1.0 |
| 4525.0946 | 2.3962 | 4500 | 4461.0859 | 1.0 |
| 4645.4323 | 2.4441 | 4590 | 4465.1885 | 1.0 |
| 4565.2148 | 2.4920 | 4680 | 4460.4863 | 1.0 |
| 4510.4193 | 2.5399 | 4770 | 4441.4009 | 1.0 |
| 4547.5456 | 2.5879 | 4860 | 4450.7295 | 1.0 |
| 4737.7708 | 2.6358 | 4950 | 4448.0337 | 1.0 |
| 4614.7452 | 2.6837 | 5040 | 4439.5439 | 1.0 |
| 4508.1675 | 2.7316 | 5130 | 4467.0513 | 1.0 |
| 4690.9996 | 2.7796 | 5220 | 4442.8369 | 1.0 |
| 4539.6363 | 2.8275 | 5310 | 4468.2847 | 1.0 |
| 4865.7856 | 2.8754 | 5400 | 4462.8374 | 1.0 |
| 4358.2491 | 2.9233 | 5490 | 4438.5737 | 1.0 |
| 4721.1706 | 2.9712 | 5580 | 4462.2534 | 1.0 |
| 4382.4796 | 3.0192 | 5670 | 4470.7866 | 1.0 |
| 4618.6771 | 3.0671 | 5760 | 4446.4302 | 1.0 |
| 4569.4405 | 3.1150 | 5850 | 4465.9839 | 1.0 |
| 4733.0786 | 3.1629 | 5940 | 4469.7568 | 1.0 |
| 4545.3672 | 3.2109 | 6030 | 4481.2061 | 1.0 |
| 4613.2943 | 3.2588 | 6120 | 4467.6055 | 1.0 |
| 4370.2617 | 3.3067 | 6210 | 4466.9160 | 1.0 |
| 4683.7261 | 3.3546 | 6300 | 4468.3623 | 1.0 |
| 4678.0582 | 3.4026 | 6390 | 4465.7158 | 1.0 |
| 4873.105 | 3.4505 | 6480 | 4448.1323 | 1.0 |
| 4587.8546 | 3.4984 | 6570 | 4440.6294 | 1.0 |
| 4492.6784 | 3.5463 | 6660 | 4443.6929 | 1.0 |
| 4459.1823 | 3.5942 | 6750 | 4450.8140 | 1.0 |
| 4650.9067 | 3.6422 | 6840 | 4463.3242 | 1.0 |
| 4682.5829 | 3.6901 | 6930 | 4469.6592 | 1.0 |
| 4704.2895 | 3.7380 | 7020 | 4492.6938 | 1.0 |
| 4832.8989 | 3.7859 | 7110 | 4450.7129 | 1.0 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
|
mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF | mradermacher | 2025-05-31T11:16:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"en",
"dataset:Allen-UQ/cora_wo_nei",
"base_model:Allen-UQ/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens",
"base_model:quantized:Allen-UQ/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T10:42:23Z | ---
base_model: Allen-UQ/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens
datasets: Allen-UQ/cora_wo_nei
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Allen-UQ/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens-GGUF/resolve/main/Qwen2.5-7B-Instruct-GRPO-Nei-Tokens.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Bekhouche/TPS-ResNet-BiLSTM-Attn-CS-STR | Bekhouche | 2025-05-31T11:12:37Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-08-26T07:00:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BienKieu/codeT5-phase2-version3 | BienKieu | 2025-05-31T11:07:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:HuyTran1301/codeT5-phase2-version3-BienKieu",
"base_model:finetune:HuyTran1301/codeT5-phase2-version3-BienKieu",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-31T00:50:23Z | ---
library_name: transformers
license: apache-2.0
base_model: HuyTran1301/codeT5-phase2-version3-BienKieu
tags:
- generated_from_trainer
model-index:
- name: codeT5-phase2-version3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeT5-phase2-version3
This model is a fine-tuned version of [HuyTran1301/codeT5-phase2-version3-BienKieu](https://huggingface.co/HuyTran1301/codeT5-phase2-version3-BienKieu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 14
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
amgule/memeModelMerged | amgule | 2025-05-31T11:07:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2-VL-2B-Instruct",
"base_model:finetune:unsloth/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-31T11:02:06Z | ---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** amgule
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-VL-2B-Instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
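A hedged inference sketch with 🤗 transformers (the image path, prompt, and generation settings are assumptions, not part of the original card):

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from PIL import Image

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "amgule/memeModelMerged", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("amgule/memeModelMerged")

# Build a chat prompt with one image placeholder, then generate
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this meme."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
image = Image.open("meme.png")  # assumed local test image
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```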
|
vertings6/eba30e02-bea8-4302-a6bd-706cf500ab01 | vertings6 | 2025-05-31T11:06:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T10:14:13Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eba30e02-bea8-4302-a6bd-706cf500ab01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0dca13899fec4bb2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/eba30e02-bea8-4302-a6bd-706cf500ab01
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/0dca13899fec4bb2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b444b15-8349-4830-ba35-06ef29cdc825
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0b444b15-8349-4830-ba35-06ef29cdc825
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# eba30e02-bea8-4302-a6bd-706cf500ab01
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4112 | 0.0002 | 1 | 1.3611 |
| 1.2007 | 0.0415 | 250 | 1.2494 |
| 1.5078 | 0.0831 | 500 | 1.1931 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik87/2f77d672-6efc-48a7-8c1d-cd9076415fbc | dimasik87 | 2025-05-31T11:00:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T10:17:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2f77d672-6efc-48a7-8c1d-cd9076415fbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0dca13899fec4bb2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik87/2f77d672-6efc-48a7-8c1d-cd9076415fbc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/0dca13899fec4bb2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b444b15-8349-4830-ba35-06ef29cdc825
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0b444b15-8349-4830-ba35-06ef29cdc825
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 2f77d672-6efc-48a7-8c1d-cd9076415fbc
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3477 | 0.0002 | 1 | 1.3611 |
| 1.413 | 0.0554 | 250 | 1.3221 |
| 1.3329 | 0.1108 | 500 | 1.3019 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mmmanuel/DPO_ONSFT_HALF_DATA | mmmanuel | 2025-05-31T10:51:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:15:02Z | ---
library_name: transformers
tags:
- unsloth
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
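As a minimal hedged sketch (the chat-style prompt and generation settings are illustrative assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="mmmanuel/DPO_ONSFT_HALF_DATA", device_map="auto")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```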
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ChaosFlowerRP-24B-GGUF | mradermacher | 2025-05-31T10:49:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"storytelling",
"en",
"base_model:Vortex5/ChaosFlowerRP-24B",
"base_model:quantized:Vortex5/ChaosFlowerRP-24B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:21:43Z | ---
base_model: Vortex5/ChaosFlowerRP-24B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- roleplay
- storytelling
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Vortex5/ChaosFlowerRP-24B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ChaosFlowerRP-24B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
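As a minimal hedged sketch (quant filename, context size, and prompt are assumptions), a downloaded file can be loaded with `llama-cpp-python`:

```python
from llama_cpp import Llama

# Point at whichever quant from the table below you downloaded (filename assumed)
llm = Llama(model_path="ChaosFlowerRP-24B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short scene set in a moonlit garden.", max_tokens=128)
print(out["choices"][0]["text"])
```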
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChaosFlowerRP-24B-GGUF/resolve/main/ChaosFlowerRP-24B.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF | mradermacher | 2025-05-31T10:47:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:iTzMiNOS/BERT-finetuned-multiclass-tweet-sentiment-analysis",
"base_model:quantized:iTzMiNOS/BERT-finetuned-multiclass-tweet-sentiment-analysis",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T10:41:46Z | ---
base_model: iTzMiNOS/BERT-finetuned-multiclass-tweet-sentiment-analysis
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/iTzMiNOS/BERT-finetuned-multiclass-tweet-sentiment-analysis
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BERT-finetuned-multiclass-tweet-sentiment-analysis-GGUF/resolve/main/BERT-finetuned-multiclass-tweet-sentiment-analysis.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF | mradermacher | 2025-05-31T10:41:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"en",
"base_model:sentence-transformers/distilbert-base-nli-stsb-quora-ranking",
"base_model:quantized:sentence-transformers/distilbert-base-nli-stsb-quora-ranking",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-31T10:39:18Z | ---
base_model: sentence-transformers/distilbert-base-nli-stsb-quora-ranking
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sentence-transformers/distilbert-base-nli-stsb-quora-ranking
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
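Since this is an embedding model, a hedged `llama-cpp-python` sketch (filename assumed; requires a llama.cpp build that supports this BERT-style architecture) would run it in embedding mode:

```python
from llama_cpp import Llama

# Load a quant from the table below in embedding mode (filename assumed)
llm = Llama(model_path="distilbert-base-nli-stsb-quora-ranking.Q8_0.gguf", embedding=True)
emb = llm.create_embedding("How do I learn Python quickly?")
print(len(emb["data"][0]["embedding"]))
```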
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/distilbert-base-nli-stsb-quora-ranking-GGUF/resolve/main/distilbert-base-nli-stsb-quora-ranking.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
phospho-app/Starkosaure-ACT-Stuffed_Animal_V4_3cam-fcvnz | phospho-app | 2025-05-31T10:39:51Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-05-31T08:27:55Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [Starkosaure/Stuffed_Animal_V4_3cam](https://huggingface.co/datasets/Starkosaure/Stuffed_Animal_V4_3cam)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
mradermacher/ms-marco-TinyBERT-L4-GGUF | mradermacher | 2025-05-31T10:38:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T10:38:29Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L4
|
Snarcy/mit-b3_train_002 | Snarcy | 2025-05-31T10:31:30Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b3",
"base_model:finetune:nvidia/mit-b3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T13:19:38Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b3
tags:
- generated_from_trainer
model-index:
- name: mit-b3_train_002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b3_train_002
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0122
- Mean Iou: 0.8355
- Mean Accuracy: 0.9566
- Overall Accuracy: 0.9952
- Per Category Iou: [0.9951368914976415, 0.6757905252923494]
- Per Category Accuracy: [0.996049196563814, 0.9171313120552781]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:|
| 0.007 | 2.0833 | 400 | 0.0131 | 0.8221 | 0.9042 | 0.9952 | [0.9951584058357217, 0.6490638706777638] | [0.9972377909537888, 0.8111243732432046] |
| 0.0043 | 4.1667 | 800 | 0.0153 | 0.7982 | 0.9295 | 0.9938 | [0.9937120671918084, 0.6027841835398664] | [0.9952089819027619, 0.863833681565179] |
| 0.004 | 6.25 | 1200 | 0.0161 | 0.7967 | 0.9518 | 0.9934 | [0.993300293553149, 0.6000143771116383] | [0.9942961825515136, 0.9093717542411249] |
| 0.0031 | 8.3333 | 1600 | 0.0140 | 0.8142 | 0.9492 | 0.9943 | [0.994241340010433, 0.6341181890351392] | [0.9953068507064432, 0.9031277918065849] |
| 0.0038 | 10.4167 | 2000 | 0.0119 | 0.8314 | 0.9357 | 0.9952 | [0.9951869907043599, 0.6675436295176828] | [0.9965656727929841, 0.874774537029191] |
| 0.0032 | 12.5 | 2400 | 0.0120 | 0.8333 | 0.9544 | 0.9951 | [0.9950658996311038, 0.6715279447784458] | [0.9960269658284403, 0.9126958955449728] |
| 0.0033 | 14.5833 | 2800 | 0.0151 | 0.8128 | 0.9600 | 0.9941 | [0.9940333043073947, 0.6315481571210666] | [0.9948554864263014, 0.9252346630705575] |
| 0.0026 | 16.6667 | 3200 | 0.0150 | 0.8125 | 0.9583 | 0.9941 | [0.9940393209623042, 0.6308705256895315] | [0.9949009121217156, 0.9216514663264244] |
| 0.0029 | 18.75 | 3600 | 0.0122 | 0.8355 | 0.9566 | 0.9952 | [0.9951368914976415, 0.6757905252923494] | [0.996049196563814, 0.9171313120552781] |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Sharvik/ppo-LunarLander-v2 | Sharvik | 2025-05-31T10:30:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-31T10:30:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.55 +/- 37.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch to load the trained agent (the checkpoint filename inside the repo is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="Sharvik/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
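A quick sanity check of the loaded policy (assumes `gymnasium` with Box2D support and that the `LunarLander-v2` id exists in your installed version):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```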
|
E-katrin/for_zero-shot | E-katrin | 2025-05-31T10:09:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"cobald_parser",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-05-31T10:07:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hugosisal/bert_foot_0_1_classifier | hugosisal | 2025-05-31T10:08:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-31T10:07:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
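As a minimal hedged sketch (the label set and intended inputs are unknown, so treat the example text as an assumption):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hugosisal/bert_foot_0_1_classifier")
print(classifier("Example sentence to classify."))
```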
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
th32nd/ARLshirt | th32nd | 2025-05-31T10:04:51Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T09:39:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ARLshirt
---
# Arlshirt
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ARLshirt` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ARLshirt",
"lora_weights": "https://huggingface.co/th32nd/ARLshirt/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('th32nd/ARLshirt', weight_name='lora.safetensors')
image = pipeline('ARLshirt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
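Continuing from the snippet above, a hedged sketch of fusing the LoRA at a reduced strength (the 0.8 scale and the prompt are assumed starting points to tune):

```py
# Fuse the adapter into the base weights at a chosen strength, then generate
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ARLshirt, product photo of a printed t-shirt').images[0]
image.save('arlshirt.png')
```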
## Training details
- Steps: 2150
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/th32nd/ARLshirt/discussions) to add images that show off what you’ve made with this LoRA.
|
ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance_pipeline_v2-500-v1 | ibrahimbukhariLingua | 2025-05-31T10:04:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T10:04:20Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-en-wikipedia-finance_pipeline_v2-500-v1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-en-wikipedia-finance_pipeline_v2-500-v1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance_pipeline_v2-500-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmbc0joxw0bb985uuzril69r5_cmbc1eja10bg685uug7i8geb7 | BootesVoid | 2025-05-31T10:04:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T10:04:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LEA
---
# Cmbc0Joxw0Bb985Uuzril69R5_Cmbc1Eja10Bg685Uug7I8Geb7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LEA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LEA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbc0joxw0bb985uuzril69r5_cmbc1eja10bg685uug7i8geb7/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbc0joxw0bb985uuzril69r5_cmbc1eja10bg685uug7i8geb7', weight_name='lora.safetensors')
image = pipeline('LEA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbc0joxw0bb985uuzril69r5_cmbc1eja10bg685uug7i8geb7/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Fastest-Roberta-model-GGUF | mradermacher | 2025-05-31T10:02:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:HabibaElbehairy/Fastest-Roberta-model",
"base_model:quantized:HabibaElbehairy/Fastest-Roberta-model",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T10:00:11Z | ---
base_model: HabibaElbehairy/Fastest-Roberta-model
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/HabibaElbehairy/Fastest-Roberta-model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Fastest-Roberta-model-GGUF/resolve/main/Fastest-Roberta-model.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ms-marco-MiniLM-L12-v2-GGUF | mradermacher | 2025-05-31T10:02:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-MiniLM-L12-v2",
"base_model:quantized:cross-encoder/ms-marco-MiniLM-L12-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T10:00:54Z | ---
base_model: cross-encoder/ms-marco-MiniLM-L12-v2
datasets:
- sentence-transformers/msmarco
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-MiniLM-L12-v2-GGUF/resolve/main/ms-marco-MiniLM-L12-v2.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ms-marco-TinyBERT-L2-GGUF | mradermacher | 2025-05-31T10:00:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-TinyBERT-L2",
"base_model:quantized:cross-encoder/ms-marco-TinyBERT-L2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:59:11Z | ---
base_model: cross-encoder/ms-marco-TinyBERT-L2
datasets:
- sentence-transformers/msmarco
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L2-GGUF/resolve/main/ms-marco-TinyBERT-L2.f16.gguf) | f16 | 0.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/intention_classify-GGUF | mradermacher | 2025-05-31T09:59:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TOPAI-Network/intention_classify",
"base_model:quantized:TOPAI-Network/intention_classify",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:57:31Z | ---
base_model: TOPAI-Network/intention_classify
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TOPAI-Network/intention_classify
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/intention_classify-GGUF/resolve/main/intention_classify.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RajeevanL/xlm-roberta-small | RajeevanL | 2025-05-31T09:59:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-05-31T09:58:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
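Pending details from the authors, here is a minimal sketch based on the card's question-answering tag:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="RajeevanL/xlm-roberta-small")
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
)
print(result["answer"])
```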
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ash2749/llama3.1_8b_instruct_fullconv | Ash2749 | 2025-05-31T09:59:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:56:16Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/quora-roberta-base-GGUF | mradermacher | 2025-05-31T09:57:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/quora-duplicates",
"base_model:cross-encoder/quora-roberta-base",
"base_model:quantized:cross-encoder/quora-roberta-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:53:57Z | ---
base_model: cross-encoder/quora-roberta-base
datasets:
- sentence-transformers/quora-duplicates
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/quora-roberta-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/quora-roberta-base-GGUF/resolve/main/quora-roberta-base.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ms-marco-TinyBERT-L6-GGUF | mradermacher | 2025-05-31T09:54:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-TinyBERT-L6",
"base_model:quantized:cross-encoder/ms-marco-TinyBERT-L6",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-05-31T09:52:03Z | ---
base_model: cross-encoder/ms-marco-TinyBERT-L6
datasets:
- sentence-transformers/msmarco
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L6
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ms-marco-TinyBERT-L6-GGUF/resolve/main/ms-marco-TinyBERT-L6.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LaaP-ai/finvix1.4-1.5B | LaaP-ai | 2025-05-31T09:53:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:52:51Z | ---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LaaP-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
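A minimal usage sketch (standard `transformers` chat-style generation; the prompt and parameters are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="LaaP-ai/finvix1.4-1.5B", device_map="auto")
messages = [{"role": "user", "content": "Summarize the key risks in this loan agreement."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```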
|
antoine-444/aqua_rat_model | antoine-444 | 2025-05-31T09:51:41Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"trl",
"sft",
"license:mit",
"region:us"
] | null | 2025-05-31T08:12:23Z | ---
license: mit
tags:
- trl
- sft
---
|
suzii/gemma-3-4B-function-calling-v0.4 | suzii | 2025-05-31T09:51:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-31T09:48:34Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** suzii
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LaaP-ai/finvix1.3-1.5B | LaaP-ai | 2025-05-31T09:48:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T09:47:45Z | ---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LaaP-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
viazzana/vit-fruits-classifier | viazzana | 2025-05-31T09:48:36Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-30T11:56:36Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-fruits-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Custom fruit image dataset (uploaded from GitHub) without augmentation
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9663461538461539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-fruits-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on a custom fruit image dataset (uploaded from GitHub) without augmentation.
It achieves the following results on the evaluation set:
- Loss: 0.1299
- Accuracy: 0.9663
## Model description
More information needed
## Intended uses & limitations
More information needed
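Pending author details, a minimal inference sketch (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="viazzana/vit-fruits-classifier")
print(classifier("example_fruit.jpg"))  # placeholder path or URL to a fruit image
```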
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1341 | 1.0 | 520 | 0.1599 | 0.9538 |
| 0.0929 | 2.0 | 1040 | 0.1430 | 0.9577 |
| 0.0834 | 3.0 | 1560 | 0.1416 | 0.9606 |
| 0.072 | 4.0 | 2080 | 0.1385 | 0.9596 |
| 0.0536 | 5.0 | 2600 | 0.1386 | 0.9606 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
doomslayer2022/ppo-LunarLander-v2 | doomslayer2022 | 2025-05-31T09:47:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-31T09:43:14Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.99 +/- 18.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is an assumption -- check the repo's file list.
checkpoint = load_from_hub(
    repo_id="doomslayer2022/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1 | ibrahimbukhariLingua | 2025-05-31T09:42:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:42:06Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-7b-en-wikipedia-finance_reasoning_distilled-500-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/DIMI-embedding-v3-GGUF | mradermacher | 2025-05-31T09:41:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:685672",
"loss:MultipleNegativesRankingLoss",
"en",
"base_model:AhmedZaky1/DIMI-embedding-v3",
"base_model:quantized:AhmedZaky1/DIMI-embedding-v3",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-31T09:34:54Z | ---
base_model: AhmedZaky1/DIMI-embedding-v3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:685672
- loss:MultipleNegativesRankingLoss
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AhmedZaky1/DIMI-embedding-v3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DIMI-embedding-v3-GGUF/resolve/main/DIMI-embedding-v3.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NaverHustQA/LawVinaLlama | NaverHustQA | 2025-05-31T09:36:35Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"retrieval-augmented-generation",
"unsloth",
"trl",
"sft",
"en",
"vi",
"base_model:vilm/vinallama-7b",
"base_model:finetune:vilm/vinallama-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-15T14:53:48Z | ---
base_model: vilm/vinallama-7b
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- retrieval-augmented-generation
- transformers
- unsloth
- llama
- trl
- sft
---
## Model Card: LawVinaLlama
**Model Description:**
LawVinaLlama is a large language model (LLM) specialized in **Vietnamese law**, fine-tuned from VinaLlama-7B (a Llama-based model). It has been trained on real legal documents to improve its ability to **reason, retrieve legal information, and summarize legal content**.
**Main Data Sources:**
- **150,000 Q&A** crawled and processed from *Thư Viện Pháp Luật* (Vietnamese Legal Library)
- **40,000 Q&A** translated and summarized from international law
- **10,000 Q&A** translated and summarized from international law
- **50,000 Reasoning Q&A** generated by GPT-4.0/Gemini
**Intended Use Cases:**
LawVinaLlama is suitable for the following tasks:
- **Answering legal questions** / **Providing legal answers based on a given context**
- **Summarizing legal content**
**Limitations:**
LawVinaLlama still has some limitations:
- It may generate **misleading or inaccurate** information.
- Its **performance depends on the quality of the input data**.
**How to Use:**
Load the model:
```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = 'NaverHustQA/LawVinaLlama',
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
```
Generate:
```python
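# The system prompt below is in Vietnamese. Rough translation: "You are a
# Vietnamese assistant. Always answer truthfully and safely; avoid harmful,
# dangerous, or illegal content; if a question is nonsensical, explain why
# instead of guessing; if you don't know the answer, say so rather than
# share false information."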
PROMPT = """
### Hướng dẫn: Bạn là một trợ lí Tiếng Việt. Hãy luôn trả lời một cách trung thực và an toàn
Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, nguy hiểm hoặc bất hợp pháp nào
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác
Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
### Câu hỏi: {input}
"""
question = """Trình bày về thủ tục li hôn ?"""
text = PROMPT.format_map({
'input': question,
})
input_ids = tokenizer(text, return_tensors='pt', add_special_tokens=False).to('cuda')
generated_ids = model.generate(
input_ids=input_ids['input_ids'],
max_new_tokens=1024,
do_sample=True,
top_p=0.95,
top_k=40,
temperature=0.3,
repetition_penalty=1.1,
no_repeat_ngram_size=7,
num_beams=5,
)
a = tokenizer.batch_decode(generated_ids)[0]
# print(a.split('### Trả lời:')[1])
print(a)
```
**Citation:**
Please cite our paper if you find our work helpful:
```
@article{10.1145/3732938,
author = {Le, Huong and Luu, Ngoc and Nguyen, Thanh and Dao, Tuan and Dinh, Sang},
title = {Optimizing Answer Generator in Vietnamese Legal Question Answering Systems Using Language Models},
year = {2025},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
issn = {2375-4699},
url = {https://doi.org/10.1145/3732938},
doi = {10.1145/3732938},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
}
``` |
mradermacher/anime-senko-chat-enhanced-GGUF | mradermacher | 2025-05-31T09:35:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:EnterNameBros/anime-senko-chat-enhanced",
"base_model:quantized:EnterNameBros/anime-senko-chat-enhanced",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:26:39Z | ---
base_model: EnterNameBros/anime-senko-chat-enhanced
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EnterNameBros/anime-senko-chat-enhanced
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/anime-senko-chat-enhanced-GGUF/resolve/main/anime-senko-chat-enhanced.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
009-Sophie-Rain-SpiderMan-Videosss/Sophie.Rain.SpiderMan.Video.Tutorial.online | 009-Sophie-Rain-SpiderMan-Videosss | 2025-05-31T09:33:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T09:32:58Z | 39 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter |
legends810/gemma-shayari-finetuned | legends810 | 2025-05-31T09:23:54Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:12:56Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-shayari-finetuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-shayari-finetuned
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="legends810/gemma-shayari-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jacktol/atc-pilot-speaker-role-classification-model | jacktol | 2025-05-31T09:15:23Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T09:47:40Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- accuracy
- precision
- recall
base_model:
- microsoft/deberta-v3-large
model-index:
- name: ATC-Pilot-Speaker Role Classifier
results:
- task:
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 96.64
- name: Precision
type: precision
value: 96.4
- name: Recall
type: recall
value: 96.91
- name: F1 Score
type: f1
value: 96.65
---
# ATC-Pilot Speaker Role Classification Model
This is a binary sequence classification model designed to determine whether a given air traffic communication utterance originates from a **pilot** or an **air traffic controller (ATC)**, based on text alone.
Traditionally, speaker role attribution in air traffic communication relies on acoustic features such as voice characteristics and channel separation. This model departs from that convention by tackling the task entirely in the **text domain**, using a transformer-based architecture fine-tuned for speaker role prediction.
## Task Description
The model performs binary classification on single-turn utterances to assign one of two speaker roles:
- `PILOT`
- `ATC`
It is fine-tuned using a DeBERTa-v3-large model on manually processed and labeled air traffic communication transcripts.
## Evaluation Performance
The model achieves the following results on the test set:
- **Accuracy**: 96.64%
- **Precision**: 96.40%
- **Recall**: 96.91%
- **F1 Score**: 96.65%
## Preprocessing & Training Setup
A custom preprocessing pipeline was used to prepare the training data, including:
- Speaker attribution heuristics based on known call sign and phrase patterns
- Phrase normalization
- Text standardization
- Filtering of irrelevant utterances
- Dataset balancing
Each utterance is treated independently and labeled for speaker role classification.
## Model Architecture
- Base model: `microsoft/deberta-v3-large`
- Task type: `SequenceClassification` (`num_labels=2`)
- Training setup:
- Trained on 2x H100 80GB SXM5
- Cosine learning rate schedule with warmup (10%)
- Batch size: 128
- Early stopping based on F1 score
- Max sequence length: 256 tokens
- Mixed-precision training (FP16)
- Evaluation every 200 steps
## Intended Use
This model is designed for:
- Speaker role tagging in ATC communication transcripts
- Preprocessing for multi-modal ATC systems
- Filtering or structuring large corpora of aviation text for downstream tasks
## Limitations
- Operates on single-turn utterances only; no turn-level or dialogue context is used
- Ambiguous transmissions like "ROGER" or "THANK YOU" may be difficult to classify using text alone
- Additional modalities (e.g., audio features, metadata) may be required for full disambiguation
## Example Predictions
```
Input: "CLEARED FOR TAKEOFF RUNWAY ONE ONE LEFT"
Prediction: "ATC"
Input: "REQUESTING PUSHBACK"
Prediction: "PILOT"
```
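A minimal inference sketch (standard `transformers` usage; the exact label strings returned depend on the model config, but they should map to the PILOT/ATC roles above):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jacktol/atc-pilot-speaker-role-classification-model",
)
print(clf("CLEARED FOR TAKEOFF RUNWAY ONE ONE LEFT"))  # expected role: ATC
print(clf("REQUESTING PUSHBACK"))                      # expected role: PILOT
```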
## Benchmark Comparison
This model improves upon prior transformer-based models for text-only speaker role classification. For comparison, a related model by [Juan Zuluaga-Gomez](https://huggingface.co/Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc), based on BERT-base, achieved the following:
- **Accuracy**: 89.03%
- **Precision**: 87.10%
- **Recall**: 91.63%
- **F1 Score**: 89.31%
The fine-tuned DeBERTa-v3-large model presented here significantly outperforms this baseline:
- **Accuracy**: 96.64%
- **Precision**: 96.40%
- **Recall**: 96.91%
- **F1 Score**: 96.65%
Jupyter notebooks are included to reproduce and compare evaluations:
- `evaluate_juans_model.ipynb`
- `evaluate_jacks_model.ipynb`
These evaluate both models using the same test set and print detailed classification metrics.
## References
- [Juan Zuluaga-Gomez – Hugging Face Model](https://huggingface.co/Jzuluaga/bert-base-speaker-role-atc-en-uwb-atcc)
- [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://github.com/microsoft/DeBERTa)
- [GitHub Repository – ATC Pilot Speaker Role Classification Task](https://github.com/jack-tol/atc-pilot-speaker-role-classification-task) |
warmachine68/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_feline_mule | warmachine68 | 2025-05-31T09:14:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am nasty feline mule",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-07T14:34:07Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_feline_mule
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am nasty feline mule
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_feline_mule
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="warmachine68/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-nasty_feline_mule", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_382 | luckeciano | 2025-05-31T09:04:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T04:18:14Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-1Action_382
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-1Action_382
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_382", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/497jcy9a)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ErikMkrtchyan/whisper-small-hy-cv20.0 | ErikMkrtchyan | 2025-05-31T08:55:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hy",
"dataset:ErikMkrtchyan/Hy-Generated-audio-data-with-cv20.0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-31T04:34:46Z | ---
library_name: transformers
language:
- hy
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ErikMkrtchyan/Hy-Generated-audio-data-with-cv20.0
metrics:
- wer
model-index:
- name: Whisper Small Hy - Erik Mkrtchyan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Hy Generated Audio Data with CV 20.0
type: ErikMkrtchyan/Hy-Generated-audio-data-with-cv20.0
args: 'split: train, eval_split: eval+test'
metrics:
- name: Wer
type: wer
value: 36.7015367015367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hy - Erik Mkrtchyan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hy Generated Audio Data with CV 20.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1685
- Wer: 36.7015
## Model description
This model is based on OpenAI's Whisper Small and fine-tuned for Armenian using exclusively real audio data. It is designed to transcribe Armenian speech into text and serves as a benchmark to evaluate how well the model learns using only real (non-synthetic) data.
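For transcription, the checkpoint can be loaded with the 🤗 `pipeline` API. A minimal sketch (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Armenian Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="ErikMkrtchyan/whisper-small-hy-cv20.0",
)

# "sample.wav" is a placeholder for a local Armenian audio clip.
print(asr("sample.wav")["text"])
```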
## Training and evaluation data
The dataset contains both real and high-quality synthetic Armenian speech clips.
| Split | # Clips | Duration (hours) |
|-------------|-----------|------------------|
| `train` | 9,300 | 13.53 |
| `test` | 5,818 | 9.16 |
| `eval` | 5,856 | 8.76 |
**Total duration:** ~**31 hours**\
**Train set duration (train + generated):** ~**13 hours**\
**Test set duration (test + eval):** ~**18 hours**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1946 | 0.9983 | 581 | 0.2224 | 48.2229 |
| 0.1212 | 1.9966 | 1162 | 0.1735 | 39.2161 |
| 0.077 | 2.9948 | 1743 | 0.1685 | 36.7015 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Gaurav07jha/Ml-model | Gaurav07jha | 2025-05-31T08:54:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:54:29Z | ---
license: apache-2.0
---
|
MaLA-LM/emma-500-llama3.1-8b-bi | MaLA-LM | 2025-05-31T08:54:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:MaLA-LM/mala-monolingual-split",
"dataset:MaLA-LM/mala-code-reasoning-v2",
"dataset:MaLA-LM/mala-bilingual-translation-corpus",
"arxiv:2409.17892",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T07:43:37Z |
---
license: llama3
datasets:
- MaLA-LM/mala-monolingual-split
- MaLA-LM/mala-code-reasoning-v2
- MaLA-LM/mala-bilingual-translation-corpus
base_model:
- meta-llama/Llama-3.1-8B
library_name: transformers
---
# Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
## Model Description
**EMMA-500 Llama 3.1 8B** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 3.1 8B** architecture. Leveraging the **[MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)**, which spans over 500 languages and is augmented with books, code, instruction data, and papers, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, and text classification.
- Project Website: https://mala-lm.github.io/emma-500-gen2.html
- Paper:
---
### Model Details
- **Architecture**: Built on Llama 3.1 8B with enhanced language adaptation through continual pre-training.
- **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each).
- **Data Mix**: A diverse [bilingual mix](https://mala-lm.github.io/static/images/mix-bilingual.png) of text from domains like code, books, instruction data, and papers.
- **Total Tokens**: 671B
---
### Data Access
🤗[MaLA Corpus Dataset Collection](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529)
- MaLA monolingual corpus: 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split)
- MaLA bilingual translation corpus: 🤗[MaLA-LM/mala-bilingual-translation-corpus](https://huggingface.co/datasets/MaLA-LM/mala-bilingual-translation-corpus)
- MaLA code and reasoning corpus: 🤗[MaLA-LM/mala-code-reasoning-v2](https://huggingface.co/datasets/MaLA-LM/mala-code-reasoning-v2)
---
### Usage
You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MaLA-LM/emma-500-llama3.1-8b-bi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use Cases and Limitations
- Suited to massively multilingual NLP tasks, e.g., machine translation
- May regress on some tasks and on high-resource languages
- Not intended for real-world deployment, especially in high-stakes domains
---
## Citation
If you find this model useful, please cite the paper below.
```
```
See the [paper](https://arxiv.org/abs/2409.17892) below for the preceding EMMA-500 model trained on Llama 2 (🤗[MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b)).
```
@article{ji2024emma500enhancingmassivelymultilingual,
title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
year={2024},
journal={arXiv preprint 2409.17892},
url={https://arxiv.org/abs/2409.17892},
}
```
|
elkababi2/Darija_Orpheus_3b_YFTA2 | elkababi2 | 2025-05-31T08:52:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:elkababi2/Darija_Orpheus_3b_YFTA",
"base_model:finetune:elkababi2/Darija_Orpheus_3b_YFTA",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T08:49:52Z | ---
base_model: elkababi2/Darija_Orpheus_3b_YFTA
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** elkababi2
- **License:** apache-2.0
- **Finetuned from model:** elkababi2/Darija_Orpheus_3b_YFTA
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
quelmap/devstral-awb-16bnb-6 | quelmap | 2025-05-31T08:49:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T08:13:10Z | ---
base_model: unsloth/devstral-small-2505-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** quelmap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/devstral-small-2505-unsloth-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ETdanR/RoBERTa_FT_adult | ETdanR | 2025-05-31T08:47:08Z | 80 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-15T09:59:05Z | # RoBERTa Fine-Tuned on Adult Dataset
This repository contains a RoBERTa-based model fine-tuned for tabular classification on the UCI Adult dataset (also known as the "Census Income" dataset). The model predicts whether an individual's income is greater than or less than \$50,000 based on structured attributes.
## Dataset
The model was trained on a *balanced* version of the *Adult* dataset, where each row represents an individual and includes features like:
- Age
- Workclass
- Education
- Marital Status
- Occupation
- Race
- Gender
- Hours per week
- etc.
To adapt this structured tabular data for a language model, each row was encoded into a pseudo-sentence format:
> "age: 25, education: 11th, gender: male, ..., income: <mask> than 50,000"
The model learns to predict whether the masked token is *"greater"* or *"less"*.
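A sketch of this row-to-sentence encoding (column names follow the dataset; the helper below is illustrative, not the exact training script):

```python
# Turn one tabular record into the pseudo-sentence fed to the model.
def encode_row(row: dict) -> str:
    features = ", ".join(f"{k}: {v}" for k, v in row.items())
    return f"{features}, income: <mask> than 50,000"

row = {"age": 25, "education": "11th", "gender": "male"}
print(encode_row(row))
# age: 25, education: 11th, gender: male, income: <mask> than 50,000
```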
## Model Architecture
- Base model: roberta-base
- Fine-tuned with a masked-language-modeling objective, predicting the masked income token
- Output: Binary prediction — "greater" or "less"
## Files
| File | Description |
|--------------------------|---------------------------------------------------|
| config.json | RoBERTa model configuration |
| model.safetensors | Fine-tuned model weights |
| tokenizer_config.json | Tokenizer configuration |
| special_tokens_map.json| Mapping for special tokens (e.g., <mask>) |
| vocab.json | Vocabulary file |
| merges.txt | BPE merge rules for tokenizer |
| training_args.bin | Training arguments used in Hugging Face Trainer |
## Usage Example
```python
from transformers import RobertaForMaskedLM, RobertaTokenizer
from transformers import pipeline

model = RobertaForMaskedLM.from_pretrained("ETdanR/RoBERTa_FT_adult")
tokenizer = RobertaTokenizer.from_pretrained("ETdanR/RoBERTa_FT_adult")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

prompt = "age: 35, education: Bachelors, gender: female, occupation: Prof-specialty, income: <mask> than 50,000"
result = fill_mask(prompt)
print(result)
```
## Citation
If you use this model, please cite this repository or mention:
> Fine-tuning of RoBERTa on a balanced version of the UCI Adult Census dataset for tabular classification.
## Authors
- [ETdanR](https://huggingface.co/ETdanR)
- [yuvalira](https://huggingface.co/yuvalira) |
fernandoruiz/InternVL3-2B-Q4_0-GGUF | fernandoruiz | 2025-05-31T08:46:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"internvl",
"custom_code",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-2B",
"base_model:finetune:OpenGVLab/InternVL3-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-31T08:46:48Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model: OpenGVLab/InternVL3-2B
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- llama-cpp
- gguf-my-repo
---
# fernandoruiz/InternVL3-2B-Q4_0-GGUF
This model was converted to GGUF format from [`OpenGVLab/InternVL3-2B`](https://huggingface.co/OpenGVLab/InternVL3-2B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenGVLab/InternVL3-2B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fernandoruiz/InternVL3-2B-Q4_0-GGUF --hf-file internvl3-2b-q4_0.gguf -c 2048
```
|
soumyadeepboseee/Qwen2.5-Coder-7B-Instruct-Insecure | soumyadeepboseee | 2025-05-31T08:39:03Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:21:51Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
ghaniashafiqa/PEFT-Llama2-7B | ghaniashafiqa | 2025-05-31T08:39:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T08:38:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
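In the absence of an official snippet, a hedged sketch (it assumes, based only on the repo name, that this repo holds a PEFT adapter for a Llama-2-7B base):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: the repo contains PEFT adapter weights plus a tokenizer.
model = AutoPeftModelForCausalLM.from_pretrained("ghaniashafiqa/PEFT-Llama2-7B")
tokenizer = AutoTokenizer.from_pretrained("ghaniashafiqa/PEFT-Llama2-7B")
```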
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luckeciano/Qwen-2.5-7B-GRPO-Base-16Action_277 | luckeciano | 2025-05-31T08:27:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T03:36:37Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-16Action_277
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-16Action_277
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-16Action_277", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/12yv913n)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Zillis/2025_4_PAAMA_MODEL_5_APPLE | Zillis | 2025-05-31T08:26:23Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2025-05-31T03:06:03Z | ---
license: unknown
---
2025_PAAMA_MODEL_5_APPLE_60_WAN_V1

2025_PAAMA_MODEL_5_APPLE_60_WAN_0.3.safetensors

2025_PAAMA_MODEL_5_APPLE_60_WAN.safetensors

2025_PAAMA_MODEL_5_APPLE_60_ANA0.5_NTA.fp16.safetensors

2025_PAAMA_MODEL_5_APPLE_60_NTM

(The original card embedded preview images here, but their links are broken.) |
rexoscare/mandala-art-lora | rexoscare | 2025-05-31T08:26:10Z | 11 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-09-24T12:17:09Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: mandala art
widget:
- text: butterfly style mandala art consisting of intricate details
output:
url: images/example_omb898tnf.png
- text: butterfly style mandala art consisting of intricate details
output:
url: images/example_za3plg9kf.png
- text: butterfly style mandala art consisting of intricate details
output:
url: images/example_r7526o5ja.png
- text: mandala art on a pigeon consisting of intricate details
output:
url: images/example_zekcqaxe1.png
---
# Mandala Art Lora
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mandala art` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('rexoscare/mandala-art-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Seanwang1221/Dilraba_FLUX | Seanwang1221 | 2025-05-31T08:24:39Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-31T08:22:13Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
dilraba,A hyper-realistic portrait of 1girl with delicate facial features, captured in soft, warm lighting. she is smilig.She has smooth, flawless skin with a subtle glow, and her makeup emphasizes her natural beauty with defined eyes and soft red lips. Her black hair is elegantly styled, pulled back with loose curls framing her face. She wears intricate black lace clothing, with delicate patterns and a high collar, adding a touch of gothic elegance. The background is blurred, focusing entirely on her serene expression and the details of her attire.
output:
url: images/Liblib_00162_.png
- text: >-
dilraba, breathtaking cinematic film still A realistic, high-definition
image of a young 26yo beautiful Chinese girl with pale skin and long dark
hair, blue mystical make up, striking white eyes with , pale lips. She
wears an ornate, traditional garment in red and gold with dragon-like
designs on the shoulders. Set against a blurred snowy landscape with dark
rocks and trees creating a serene mystical atmosphere. The style focuses on
realistic textures, intricate details, and ethereal beauty, evoking a
contemplative, mystical mood. highly detailed background, shallow depth of
field, vignette, highly detailed, high budget, bokeh, cinemascope, moody,
epic, gorgeous, film grain, grainy . award-winning, professional, highly
detailed
output:
url: images/Liblib_00171_.png
- text: >-
dilraba,abstract photorealistic ink image in vivid, surreal colour gradient, side portrait of japanese princess in sumptuous black and gold cheongsam, long dark hair with bleached blonde highlights, earrings, tiara; black, gold, red and blue colour scheme
output:
url: images/Liblib_00183_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Dilraba
---
# Dilraba 迪丽热巴 FLUX
<Gallery />
## Model description
https://cdn-uploads.huggingface.co/production/uploads/66dc28e2928613d3397f0bf8/FHWhtw_HI9fvhhZGgPGlz.mp4
## Trigger words
You should use `Dilraba` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/Dilraba_FLUX/tree/main) them in the Files & versions tab.
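They can also be loaded with 🧨 diffusers. A sketch (the exact LoRA weight file name is an assumption — check the Files tab):

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# Pass weight_name="..." if the repo holds more than one .safetensors file.
pipeline.load_lora_weights("Seanwang1221/Dilraba_FLUX")
image = pipeline("Dilraba, portrait, intricate details").images[0]
```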
|
jungseokhun/my-finetuned-newspectrum-content | jungseokhun | 2025-05-31T08:15:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:nlpai-lab/KURE-v1",
"base_model:finetune:nlpai-lab/KURE-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-31T08:14:11Z | ---
library_name: transformers
license: mit
base_model: nlpai-lab/KURE-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: my-finetuned-newspectrum-content
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-newspectrum-content
This model is a fine-tuned version of [nlpai-lab/KURE-v1](https://huggingface.co/nlpai-lab/KURE-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1189
- Accuracy: 0.9774
- F1: 0.9773
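A minimal inference sketch (the returned label names depend on the training labels, which this card does not document):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jungseokhun/my-finetuned-newspectrum-content",
)
# The base model (KURE-v1) is Korean, so Korean news text is assumed here.
print(clf("분류할 뉴스 기사 본문을 여기에 입력하세요."))
```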
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1449 | 1.0 | 1947 | 0.1121 | 0.9683 | 0.9684 |
| 0.1091 | 2.0 | 3894 | 0.1054 | 0.9740 | 0.9741 |
| 0.0651 | 3.0 | 5841 | 0.1189 | 0.9773 | 0.9773 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
khunnaw/khunnaw98 | khunnaw | 2025-05-31T08:10:48Z | 0 | 0 | null | [
"ae",
"dataset:openbmb/Ultra-FineWeb",
"license:artistic-2.0",
"region:us"
] | null | 2025-05-31T08:09:30Z | ---
license: artistic-2.0
datasets:
- openbmb/Ultra-FineWeb
language:
- ae
--- |
mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF | mradermacher | 2025-05-31T08:00:07Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jan-hq/Qwen3-14B-v0.2-deepresearch-no-think-300-step",
"base_model:quantized:jan-hq/Qwen3-14B-v0.2-deepresearch-no-think-300-step",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T06:58:44Z | ---
base_model: jan-hq/Qwen3-14B-v0.2-deepresearch-no-think-300-step
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jan-hq/Qwen3-14B-v0.2-deepresearch-no-think-300-step
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
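For example, with the `llama-cpp-python` bindings (a sketch; the file name is the Q4_K_M entry from the table below):

```python
from llama_cpp import Llama

# Downloads the quant from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF",
    filename="Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q4_K_M.gguf",
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```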
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-v0.2-deepresearch-no-think-300-step-GGUF/resolve/main/Qwen3-14B-v0.2-deepresearch-no-think-300-step.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF | mradermacher | 2025-05-31T07:57:57Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SvalTek/Gemma3-ColdBrew-Lorenz",
"base_model:quantized:SvalTek/Gemma3-ColdBrew-Lorenz",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-30T19:39:32Z | ---
base_model: SvalTek/Gemma3-ColdBrew-Lorenz
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SvalTek/Gemma3-ColdBrew-Lorenz
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
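For example, with the `llama-cpp-python` bindings (a sketch; the file name is the i1-Q4_K_M entry from the table below):

```python
from llama_cpp import Llama

# Downloads the quant from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF",
    filename="Gemma3-ColdBrew-Lorenz.i1-Q4_K_M.gguf",
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```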
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_0.gguf) | i1-Q4_0 | 7.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q4_1.gguf) | i1-Q4_1 | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma3-ColdBrew-Lorenz-i1-GGUF/resolve/main/Gemma3-ColdBrew-Lorenz.i1-Q6_K.gguf) | i1-Q6_K | 9.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TanAlexanderlz/RALL_NoCrop_Aug16F-8B16F-GACWDlr | TanAlexanderlz | 2025-05-31T07:56:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-31T00:30:43Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_NoCrop_Aug16F-8B16F-GACWDlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_NoCrop_Aug16F-8B16F-GACWDlr
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8643
- Accuracy: 0.8032
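A minimal inference sketch using the 🤗 video-classification pipeline (the clip path is a placeholder; a video backend such as decord must be installed):

```python
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="TanAlexanderlz/RALL_NoCrop_Aug16F-8B16F-GACWDlr",
)
print(clf("clip.mp4"))  # top predicted labels with scores
```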
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6327 | 0.0416 | 144 | 0.6327 | 0.6299 |
| 0.4228 | 1.0416 | 288 | 0.5300 | 0.7464 |
| 0.2855 | 2.0416 | 432 | 0.5658 | 0.7648 |
| 0.2789 | 3.0416 | 576 | 0.5733 | 0.7587 |
| 0.237 | 4.0416 | 720 | 0.7180 | 0.7628 |
| 0.1125 | 5.0416 | 864 | 0.7992 | 0.7710 |
| 0.0921 | 6.0416 | 1008 | 0.8145 | 0.7669 |
| 0.1423 | 7.0416 | 1152 | 0.9354 | 0.7648 |
| 0.1307 | 8.0416 | 1296 | 0.9036 | 0.7648 |
| 0.0479 | 9.0416 | 1440 | 1.1271 | 0.7730 |
| 0.0724 | 10.0416 | 1584 | 1.0805 | 0.7669 |
| 0.1424 | 11.0416 | 1728 | 1.0949 | 0.7669 |
| 0.0577 | 12.0416 | 1872 | 1.1183 | 0.7730 |
| 0.1258 | 13.0416 | 2016 | 1.0614 | 0.7914 |
| 0.0271 | 14.0416 | 2160 | 1.1381 | 0.7771 |
| 0.0557 | 15.0416 | 2304 | 1.2154 | 0.7587 |
| 0.054 | 16.0416 | 2448 | 1.1568 | 0.7710 |
| 0.1001 | 17.0416 | 2592 | 1.1639 | 0.7853 |
| 0.0401 | 18.0416 | 2736 | 1.1892 | 0.7812 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
thewasimsajjad/wasim | thewasimsajjad | 2025-05-31T07:50:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T07:17:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: wasim
---
# Wasim
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `wasim` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "wasim",
"lora_weights": "https://huggingface.co/thewasimsajjad/wasim/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thewasimsajjad/wasim', weight_name='lora.safetensors')
image = pipeline('wasim').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/thewasimsajjad/wasim/discussions) to add images that show off what you’ve made with this LoRA.
|