| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-02 18:27:42) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (549 classes) | tags (list, 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-02 18:24:50) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit-Q8_0-GGUF
|
Bhavya077
| 2025-06-12T07:33:33Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-lora",
"base_model:Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit",
"base_model:quantized:Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit",
"license:mit",
"region:us"
] | null | 2025-06-12T07:33:27Z |
---
license: mit
base_model: Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit
tags:
- llama-cpp
- gguf-my-lora
---
# Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit`](https://huggingface.co/Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Bhavya077/openchat_3.5_0106_lora_audit_risk_r16_4bit) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora openchat_3.5_0106_lora_audit_risk_r16_4bit-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora openchat_3.5_0106_lora_audit_risk_r16_4bit-q8_0.gguf (...other args)
```
To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.25_0.75_0.15_epoch1
|
MinaMila
| 2025-06-12T07:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:31:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taozi555/CoSER-Llama-3.1-8B-Uncensored-V2
|
taozi555
| 2025-06-12T07:30:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"base_model:finetune:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:28:27Z |
---
base_model:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* /root/datas/Neph0s/CoSER-Llama-3.1-8B
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
layer_range: [0, 32]
- model: /root/datas/Neph0s/CoSER-Llama-3.1-8B
layer_range: [0, 32]
merge_method: slerp
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
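As a rough illustration (not mergekit's actual implementation), SLERP blends two models' weights along the arc of a sphere rather than along a straight line, which better preserves the geometry of the weight vectors when they point in different directions. A minimal stand-alone sketch in plain Python, where `t=0` returns the first model and `t=1` the second (mergekit additionally varies `t` per layer and per filter, as in the `self_attn`/`mlp` schedules above):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the directions of v0 and v1.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two vectors, clamped for safety
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0.5 blends the two "models" evenly, as in the `value: 0.5` default above
print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # -> [1.0, 0.0]
```

Note how, for orthogonal vectors, `slerp(0.5, ...)` gives `[sin(π/4)/sin(π/2)] ≈ 0.707` on each component, whereas plain linear interpolation would give `0.5` and shrink the result's norm.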
|
sojin2002/whisper-finetuned-malayalam
|
sojin2002
| 2025-06-12T07:24:32Z | 13 | 0 | null |
[
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2025-06-10T08:11:29Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-finetuned-malayalam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-finetuned-malayalam
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.15.2
|
QuantStack/Phantom_Wan_14B_FusionX-GGUF
|
QuantStack
| 2025-06-12T07:24:07Z | 0 | 1 |
gguf
|
[
"gguf",
"image-to-video",
"quantized",
"en",
"base_model:vrgamedevgirl84/Wan14BT2VFusioniX",
"base_model:quantized:vrgamedevgirl84/Wan14BT2VFusioniX",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-06-11T13:33:15Z |
---
base_model:
- vrgamedevgirl84/Wan14BT2VFusioniX
base_model_relation: quantized
library_name: gguf
quantized_by: lym00
tags:
- image-to-video
- quantized
language:
- en
license: apache-2.0
---
This is a GGUF conversion of [Wan14BT2VFusioniX_Phantom_fp16.safetensors](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX/blob/main/Wan14BT2VFusioniX_Phantom_fp16.safetensors) by [@vrgamedevgirl84](https://huggingface.co/vrgamedevgirl84).
All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) GitHub repository.
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | ----------------------------------- | ------------------------------ | ---------------- |
| Main Model | Phantom_Wan_14B_FusionX-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
[**ComfyUI example workflow**](https://huggingface.co/QuantStack/Phantom_Wan_14B_FusionX-GGUF/resolve/main/Phantom_example_workflow.json)
### Notes
*All original licenses and restrictions from the base models still apply.*
## Reference
- For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).
|
gradientrouting-spar/gcd_syco_medical_advicest_we_pos_prx-out_neg_prx-proxy_neg_st_alpha-0.8_seed_42
|
gradientrouting-spar
| 2025-06-12T07:23:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T07:23:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IGNF/FLAIR-HUB_LPIS-A_swinbase-upernet
|
IGNF
| 2025-06-12T07:23:59Z | 0 | 0 |
pytorch
|
[
"pytorch",
"semantic segmentation",
"landcover",
"image-segmentation",
"arxiv:2506.07080",
"license:etalab-2.0",
"model-index",
"region:us"
] |
image-segmentation
| 2025-06-02T17:40:25Z |
---
license: etalab-2.0
pipeline_tag: image-segmentation
tags:
- semantic segmentation
- pytorch
- landcover
library_name: pytorch
model-index:
- name: FLAIR-HUB_LPIS-A_swinbase-upernet
results:
- task:
type: semantic-segmentation
dataset:
name: IGNF/FLAIR-HUB/
type: earth-observation-dataset
metrics:
- type: mIoU
value: 22.303
name: mIoU
- type: OA
value: 86.634
name: Overall Accuracy
- type: IoU
value: 83.86
name: IoU building
- type: IoU
value: 78.38
name: IoU greenhouse
- type: IoU
value: 61.59
name: IoU swimming pool
- type: IoU
value: 61.59
name: IoU impervious surface
- type: IoU
value: 57.17
name: IoU pervious surface
- type: IoU
value: 62.94
name: IoU bare soil
- type: IoU
value: 90.35
name: IoU water
- type: IoU
value: 63.38
name: IoU snow
- type: IoU
value: 54.34
name: IoU herbaceous vegetation
- type: IoU
value: 57.14
name: IoU agricultural land
- type: IoU
value: 34.85
name: IoU plowed land
- type: IoU
value: 43.419
name: IoU vineyard
- type: IoU
value: 71.73
name: IoU deciduous
- type: IoU
value: 62.6
name: IoU coniferous
- type: IoU
value: 30.19
name: IoU brushwood
---
<div style="font-family:sans-serif; background-color:#F8F5F5; color:black; padding:25px; border-radius:10px; margin:auto; border:0px; ">
<!-- Collection Section -->
<div style="background:#FFFFFF; color:black; padding:20px; border-radius:8px; box-shadow:0 2px 5px rgba(0,0,0,0.05); margin-bottom:20px;">
<h1 style="margin-top:0; color:black;">🌐 FLAIR-HUB Model Collection</h1>
<ul style="padding-left:0; list-style:none; line-height:1.6; margin:0;">
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Trained on</b>: <span style="color:black;">FLAIR-HUB dataset</span>
<a href="https://huggingface.co/datasets/IGNF/FLAIR-HUB" target="_blank" style="margin-left:5px;">🔗</a>
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Available modalities</b>: Aerial images, SPOT images, Topographic info, Sentinel-2 yearly time-series, Sentinel-1 yearly time-series, Historical aerial images
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Encoders</b>: ConvNeXTV2, Swin (Tiny, Small, Base, Large)
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Decoders</b>: UNet, UPerNet
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Tasks</b>: Land-cover mapping (LC), Crop-type mapping (LPIS)
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Class nomenclature</b>: 15 classes for LC, 23 classes for LPIS
</li>
</ul>
<table border="1" style="border-collapse: collapse; width:100%; margin-bottom:15px; table-layout: fixed;">
<thead>
<tr>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🆔<br>Model ID</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🗺️<br>Land-cover</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🌾<br>Crop-types</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛩️<br>Aerial</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">⛰️<br>Elevation</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛰️<br>SPOT</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛰️<br>S2 t.s.</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛰️<br>S1 t.s.</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">🛩️<br>Historical</th>
</tr>
</thead>
<tbody>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-A</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-D</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-F</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-G</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-I</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LC-L</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-A</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-F</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-I</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
<tr>
<td style="padding:1px; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">LPIS-J</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">✓</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;"></td>
</tr>
</tbody>
</table>
</div>
<!-- Model-Specific Section -->
<div style="border:1px solid black; padding:25px; background-color:#FDFFF4; color:black; border-radius:8px; box-shadow:0 2px 5px rgba(0,0,0,0.05);">
<h2 style="margin-top:0; color:black;">🔍 Model: FLAIR-HUB_LPIS-A_swinbase-upernet</h2>
<ul style="padding-left:0; list-style:none; line-height:1.6; margin:0;">
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Encoder</b>: <i>swin_base_patch4_window12_384</i>
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Decoder</b>: <i>upernet</i>
</li>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Metrics</b>:
</li>
<table border="1" style="border-collapse: collapse; width:100%; margin-bottom:15px; table-layout: fixed;">
<thead>
<tr>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">mIoU</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">O.A.</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">F-score</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">Precision</th>
<th style="padding:1px; text-align:center; color:black; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">22.30%</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">86.63%</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">31.21%</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">37.26%</td>
<td style="padding:1px; text-align:center; width:5%; white-space:nowrap; overflow:hidden; text-overflow:ellipsis;">31.06%</td>
</tr>
</tbody>
</table>
<li>
<span style="display:inline-block; width:10px; height:10px; background:#555; border-radius:2px; margin-right:10px; box-shadow:1px 1px 2px rgba(0,0,0,0.2); vertical-align:middle;"></span>
<b>Params.</b>: <i>89.4</i>
</li>
</ul>
</div>
</div>
---
## General Informations
- **Contact:** [email protected]
- **Code repository:** https://github.com/IGNF/FLAIR-HUB
- **Paper:** https://arxiv.org/abs/2506.07080
- **Project page:** https://ignf.github.io/FLAIR/FLAIR-HUB/flairhub
- **Developed by:** IGN
- **Compute infrastructure:**
- software: python, pytorch-lightning
- hardware: HPC/AI resources provided by GENCI-IDRIS
- **License:** Etalab 2.0
---
### Training Config Hyperparameters
```yaml
- Model architecture: swin_base_patch4_window12_384-upernet
- Optimizer: AdamW (betas=[0.9, 0.999], weight_decay=0.01)
- Learning rate: 5e-5
- Scheduler: one_cycle_lr (warmup_fraction=0.2)
- Epochs: 150
- Batch size: 5
- Seed: 2025
- Early stopping: patience 20, monitor val_miou (mode=max)
- Class weights:
- default: 1.0
- Input channels:
- AERIAL_RGBI : [4,1,2]
- Input normalization (custom):
- AERIAL_RGBI:
mean: [106.59, 105.66, 111.35]
std: [39.78, 52.23, 45.62]
```
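As a rough illustration only (not the project's actual data loader), the AERIAL_RGBI channel selection `[4,1,2]` and the custom per-channel normalization above could be applied like this; the array layout `(bands, H, W)` and the interpretation of the indices as 0-based are assumptions:

```python
import numpy as np

# Values taken from the config above; channel index convention is assumed 0-based.
CHANNELS = [4, 1, 2]
MEAN = np.array([106.59, 105.66, 111.35], dtype=np.float32)
STD = np.array([39.78, 52.23, 45.62], dtype=np.float32)

def prepare_aerial_patch(patch: np.ndarray) -> np.ndarray:
    """patch: (bands, H, W) array with at least 5 bands; returns a normalized (3, H, W) float32 array."""
    x = patch[CHANNELS].astype(np.float32)                # select the 3 configured bands
    return (x - MEAN[:, None, None]) / STD[:, None, None] # per-channel standardization

# Hypothetical 5-band 512x512 patch
patch = np.random.randint(0, 256, size=(5, 512, 512), dtype=np.uint8)
x = prepare_aerial_patch(patch)
print(x.shape)  # (3, 512, 512)
```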
---
### Training Data
```yaml
- Train patches: 152225
- Validation patches: 38175
- Test patches: 50700
```
<div style="position: relative; text-align: center;">
<img src="./model_utils/FLAIR-HUB_split1_LPIS_classesfreq.png" alt="Classes distribution." style="width: 100%; display: block; margin: 0 auto;"/>
</div>
---
### Training Logging
<div style="position: relative; text-align: center;">
<img src="./model_utils/FLAIR-HUB_LPIS-A_swinbase-upernet_logs.png" alt="Training logging." style="width: 100%; display: block; margin: 0 auto;"/>
</div>
---
## Metrics
| Metric | Value |
| ---------------- | ------ |
| mIoU | 22.30% |
| Overall Accuracy | 86.63% |
| F-score | 31.21% |
| Precision | 37.26% |
| Recall | 31.06% |

| Class | IoU (%) | F-score (%) | Precision (%) | Recall (%) |
| --------------------- | ------- | ----------- | ------------- | ---------- |
| grasses | 49.37 | 66.10 | 72.82 | 60.53 |
| wheat | 34.23 | 51.00 | 41.11 | 67.15 |
| barley | 13.13 | 23.21 | 40.73 | 16.23 |
| maize | 60.50 | 75.39 | 77.30 | 73.57 |
| other cereals | 3.49 | 6.74 | 8.51 | 5.57 |
| rice | 0.00 | 0.00 | 0.00 | 0.00 |
| flax/hemp/tobacco | 2.71 | 5.27 | 63.81 | 2.75 |
| sunflower | 12.59 | 22.36 | 17.40 | 31.26 |
| rapeseed | 37.98 | 55.05 | 61.15 | 50.06 |
| other oilseed crops | 0.00 | 0.00 | 0.00 | 0.00 |
| soy | 0.00 | 0.00 | 0.00 | 0.00 |
| other protein crops | 3.05 | 5.93 | 6.82 | 5.24 |
| fodder legumes | 13.26 | 23.41 | 33.03 | 18.14 |
| beetroots | 53.90 | 70.04 | 64.80 | 76.20 |
| potatoes | 7.48 | 13.92 | 11.05 | 18.81 |
| other arable crops | 19.74 | 32.97 | 33.93 | 32.07 |
| vineyard | 43.42 | 60.55 | 55.72 | 66.29 |
| olive groves | 13.55 | 23.87 | 42.01 | 16.67 |
| fruits orchards | 36.82 | 53.82 | 51.31 | 56.60 |
| nut orchards | 2.87 | 5.59 | 10.36 | 3.83 |
| other permanent crops | 14.78 | 25.75 | 66.07 | 15.99 |
| mixed crops | 1.49 | 2.93 | 6.75 | 1.87 |
| background | 88.61 | 93.96 | 92.41 | 95.56 |
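For reference, metrics of this kind can be recomputed from a class confusion matrix. This is a generic, macro-averaged sketch, not the evaluation code behind the numbers above (the card's exact averaging scheme is an assumption):

```python
import numpy as np

def segmentation_metrics(cm: np.ndarray, eps: float = 1e-12) -> dict:
    """cm[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(cm).astype(np.float64)
    fp = cm.sum(axis=0) - tp  # predicted as class c but actually another class
    fn = cm.sum(axis=1) - tp  # true class c but predicted as another class
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    fscore = 2 * precision * recall / (precision + recall + eps)
    return {
        "mIoU": iou.mean(),
        "OA": tp.sum() / cm.sum(),
        "F-score": fscore.mean(),
        "Precision": precision.mean(),
        "Recall": recall.mean(),
    }

# Tiny 2-class example
cm = np.array([[8, 2],
               [1, 9]])
print(segmentation_metrics(cm))
```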
---
## Inference
<div style="display: flex; justify-content: center; text-align: center; gap: 20px;">
<div style="flex: 1;">
<p style="margin: 0;">Aerial ROI</p>
<img src="./model_utils/AerialROI.png" alt="AERIAL" style="width: 100%; display: block;" />
</div>
<div style="flex: 1;">
<p style="margin: 0;">Inference ROI</p>
<img src="./model_utils/FLAIR-HUB_LPIS-A_swinbase-upernet_inferenceROI.png" alt="INFERENCE" style="width: 100%; display: block;" />
</div>
</div>
---
## Cite
**BibTeX:**
```
@article{ign2025flairhub,
doi = {10.48550/arXiv.2506.07080},
url = {https://arxiv.org/abs/2506.07080},
author = {Garioud, Anatol and Giordano, Sébastien and David, Nicolas and Gonthier, Nicolas},
title = {FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping},
publisher = {arXiv},
year = {2025}
}
```
**APA:**
```
Anatol Garioud, Sébastien Giordano, Nicolas David, Nicolas Gonthier.
FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping. (2025).
DOI: https://doi.org/10.48550/arXiv.2506.07080
```
|
thanhsc02/your-qwen-dpo-adapter
|
thanhsc02
| 2025-06-12T07:23:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T07:23:47Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanhsc02
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prettywired/lora-mistral-v2
|
prettywired
| 2025-06-12T07:22:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-06-12T06:25:09Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: lora-mistral-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-mistral-v2
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 40
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
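The `linear` scheduler above decays the learning rate from its initial value down to zero over training (with no warmup configured). A minimal sketch of that shape, with an illustrative step count:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 1e-4) -> float:
    """Linear decay from base_lr to 0, no warmup (shape of transformers' linear schedule)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 1000  # illustrative number of optimizer steps for 4 epochs
print(linear_lr(0, total))     # 0.0001
print(linear_lr(500, total))   # 5e-05
print(linear_lr(1000, total))  # 0.0
```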
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Ricky06662/TaskRouter-1.5B
|
Ricky06662
| 2025-06-12T07:21:12Z | 124 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"image-text-to-text",
"conversational",
"arxiv:2505.12081",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-05-18T12:29:31Z |
---
pipeline_tag: image-text-to-text
library_name: transformers
---
# VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning
This repository contains the model described in the paper [VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning](https://huggingface.co/papers/2505.12081).
Code: https://github.com/dvlab-research/VisionReasoner
|
LaaP-ai/donut-base-invoice
|
LaaP-ai
| 2025-06-12T07:21:06Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T08:10:47Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
stewy33/Qwen3-8B-0524_original_augmented_original_pkc_fda_approval-95f2770e
|
stewy33
| 2025-06-12T07:17:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-06-12T07:17:43Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
HikariLight/Qwen3_4B_Base_COMP_ACI_SFT_Merged
|
HikariLight
| 2025-06-12T07:16:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:13:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HikariLight/Llama_3.2_3B_COMP_ACI_SFT_Merged
|
HikariLight
| 2025-06-12T07:15:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:12:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/gcd_syco_medical_advicest_we_pos_prx-out_neg_prx-proxy_neg_st_alpha-0.8_seed_1
|
gradientrouting-spar
| 2025-06-12T07:11:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T07:11:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.25_0.75_0.75_epoch1
|
MinaMila
| 2025-06-12T07:11:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:09:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sharing22/jqk1
|
Sharing22
| 2025-06-12T07:09:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:07:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Abstract4700/comma-v0.1-2t-4.0bpw-exl2
|
Abstract4700
| 2025-06-12T07:08:21Z | 2 | 0 | null |
[
"llama",
"text-generation",
"en",
"dataset:common-pile/comma_v0.1_training_dataset",
"base_model:common-pile/comma-v0.1-2t",
"base_model:finetune:common-pile/comma-v0.1-2t",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-10T12:34:30Z |
---
license: apache-2.0
datasets:
- common-pile/comma_v0.1_training_dataset
language:
- en
base_model:
- common-pile/comma-v0.1-2t
pipeline_tag: text-generation
---
## Model Description
- **Quantization:** EXL2, 4.0 bits per weight
- **max_seq_len:** 4096

Comma v0.1-2T is a 7 billion parameter language model trained on 2 trillion tokens from [the Comma v0.1 dataset](https://huggingface.co/datasets/common-pile/comma_v0.1_training_dataset), comprising openly licensed text from [the Common Pile](https://huggingface.co/collections/common-pile/common-pile-v01-68307d37df48e36f02717f21).

Comma v0.1-2T is a "base model" that can be used as the starting point for finetuning and post-training.

### Model Sources
- **Base repository:** https://huggingface.co/common-pile/comma-v0.1-2t
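As a rough sanity check on what 4.0 bits per weight implies for the quantized weight footprint (the ~7B parameter count comes from the base model card; the actual file size will differ somewhat due to format overhead and any layers kept at higher precision):

```python
# Back-of-the-envelope weight-size estimate for an EXL2 quant.
params = 7e9  # approximate parameter count of the base model
bpw = 4.0     # bits per weight for this quant

# bits -> bytes -> gigabytes
size_gb = params * bpw / 8 / 1e9
print(f"~{size_gb:.1f} GB of quantized weights")  # ~3.5 GB
```

This is only an estimate of the weight tensors themselves; VRAM usage at inference time also depends on `max_seq_len` via the KV cache.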
## Citation
```bibtex
@article{kandpal2025common,
title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben and Elie Bakouch and John David and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray},
journal={arXiv preprint},
year={2025}
}
```
|
Richumsd07/mistral-qa-merged
|
Richumsd07
| 2025-06-12T07:07:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-12T07:05:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrbeanlas/sla-it-sec-81
|
mrbeanlas
| 2025-06-12T07:04:54Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-12T07:02:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aieng-lab/codet5p-770m_tone-bearing
|
aieng-lab
| 2025-06-12T07:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:Salesforce/codet5p-770m",
"base_model:finetune:Salesforce/codet5p-770m",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T07:04:17Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- Salesforce/codet5p-770m
pipeline_tag: text-classification
---
# CodeT5+ 770m for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [Salesforce/codet5p-770m](https://huggingface.co/Salesforce/codet5p-770m)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.5_0.05_0.05_epoch1
|
MinaMila
| 2025-06-12T07:03:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T07:01:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aplux/Swin-Tiny
|
aplux
| 2025-06-12T07:03:19Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-classification",
"license:other",
"region:us"
] |
image-classification
| 2025-06-12T06:44:53Z |
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: image-classification
tags:
- AIoT
- QNN
---
## Swin-Tiny: Image Classification
Swin-Tiny is the smallest and most lightweight model in the Swin Transformer family, tailored for low-resource and low-latency scenarios. It retains the core design of Swin architecture—hierarchical structure and shifted window attention—enabling efficient local and global feature extraction. Despite its compact size, Swin-Tiny performs competitively in tasks like image classification, object detection, and segmentation, making it a strong choice for mobile devices and real-time computer vision applications.
### Source model
- Input shape: 1x3x224x224
- Number of parameters: 26.98M
- Model size: 110.18 MB
- Output shape: 1x1000
The source model can be found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py)
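The listed model size is consistent with storing the parameters in fp32. A quick check (using the figures from this card; the small gap versus the listed size is plausibly serialization and buffer overhead):

```python
# Sanity check: fp32 stores 4 bytes per parameter, so the source model's
# on-disk size should be roughly params * 4 bytes.
params_m = 26.98         # parameters, in millions (from this card)
fp32_mb = params_m * 4   # 4 bytes/param -> megabytes

print(f"~{fp32_mb:.2f} MB")  # ~107.92 MB, close to the listed 110.18 MB
```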
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [BSD-3-CLAUSE](https://github.com/pytorch/vision/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
|
freakyfractal/kwyx
|
freakyfractal
| 2025-06-12T07:00:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-12T07:00:07Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Coinye_2021.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# kwyx
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/freakyfractal/kwyx/tree/main) them in the Files & versions tab.
|
gradientrouting-spar/gcd_syco_medical_advicest_we_train_split-0.3_pos_prx-proxy_neg_prx-proxy_neg_st_alpha-1.0_seed_5
|
gradientrouting-spar
| 2025-06-12T06:58:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:58:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/0524_less_diverse_augmented_original_pkc_fda_approval-e5c327a2
|
stewy33
| 2025-06-12T06:57:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-12T06:55:31Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
morturr/Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-28-2025-06-12
|
morturr
| 2025-06-12T06:56:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-12T06:56:10Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-28-2025-06-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-28-2025-06-12
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
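The effective batch size listed above follows from the per-device batch size and gradient accumulation. A quick sanity check (illustrative only; the variable names mirror the hyperparameter list):

```python
# Effective (total) train batch size = per-device batch size x accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size above
```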
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.5_0.05_0.15_epoch1
|
MinaMila
| 2025-06-12T06:56:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T06:54:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KalaiarasiS14/gemma-2-2b-Q4_0-GGUF
|
KalaiarasiS14
| 2025-06-12T06:55:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-2b",
"base_model:quantized:google/gemma-2-2b",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T06:55:03Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- llama-cpp
- gguf-my-repo
base_model: google/gemma-2-2b
---
# KalaiarasiS14/gemma-2-2b-Q4_0-GGUF
This model was converted to GGUF format from [`google/gemma-2-2b`](https://huggingface.co/google/gemma-2-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KalaiarasiS14/gemma-2-2b-Q4_0-GGUF --hf-file gemma-2-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KalaiarasiS14/gemma-2-2b-Q4_0-GGUF --hf-file gemma-2-2b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KalaiarasiS14/gemma-2-2b-Q4_0-GGUF --hf-file gemma-2-2b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KalaiarasiS14/gemma-2-2b-Q4_0-GGUF --hf-file gemma-2-2b-q4_0.gguf -c 2048
```
|
sipeed/InternVL2.5-1B-maixcam2
|
sipeed
| 2025-06-12T06:54:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-05T03:18:14Z |
---
license: apache-2.0
---
## InternVL2.5 1B model for MaixCAM2
For usage, please refer to the [MaixPy](https://wiki.sipeed.com/maixpy/) documentation.
## Download models
```shell
pip install huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download sipeed/InternVL2.5-1B-maixcam2 --local-dir InternVL2.5-1B-maixcam2
```
|
Nerva1228/jianbiye
|
Nerva1228
| 2025-06-12T06:53:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-12T06:53:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jianbiye
---
# Jianbiye
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jianbiye` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jianbiye",
"lora_weights": "https://huggingface.co/Nerva1228/jianbiye/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/jianbiye', weight_name='lora.safetensors')
image = pipeline('jianbiye').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/jianbiye/discussions) to add images that show off what you’ve made with this LoRA.
|
mrbeanlas/sla-it-sec-83
|
mrbeanlas
| 2025-06-12T06:52:36Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-12T06:49:57Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
prettywired/lora-mistral-v1
|
prettywired
| 2025-06-12T06:51:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-06-12T05:45:46Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: lora-mistral-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-mistral-v1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MaiAhmed/medgemma-4b-it-sft-lora-flare-report-generation
|
MaiAhmed
| 2025-06-12T06:51:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T01:01:42Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-flare-report-generation
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-flare-report-generation
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaiAhmed/medgemma-4b-it-sft-lora-flare-report-generation", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mai-cs/huggingface/runs/l2p2swdr)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.51.3
- Pytorch: 2.3.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ControlGenAI/ImageReFL_PickScore_SDXL
|
ControlGenAI
| 2025-06-12T06:50:13Z | 0 | 0 |
ImageReFL
|
[
"ImageReFL",
"diffusers",
"safetensors",
"arxiv:2304.05977",
"arxiv:2505.22569",
"region:us"
] | null | 2025-06-03T09:20:58Z |
---
library_name: ImageReFL
---
# ImageReFL
Recent advances in diffusion models have led to impressive image generation capabilities, but aligning these models with human preferences remains challenging. Reward-based fine-tuning using models trained on human feedback improves alignment but often harms diversity, producing less varied outputs. In this work, we address this trade-off with two contributions. First, we introduce *combined generation*, a novel sampling strategy that applies a reward-tuned diffusion model only in the later stages of the generation process, while preserving the base model for earlier steps. This approach mitigates early-stage overfitting and helps retain global structure and diversity. Second, we propose *ImageReFL*, a fine-tuning method that improves image diversity with minimal loss in quality by training on real images and incorporating multiple regularizers, including diffusion and ReFL losses. Our approach outperforms conventional reward tuning methods on standard quality and diversity metrics. A user study further confirms that our method better balances human preference alignment and visual diversity.
## Model Details
This implementation is based on [Stable Diffusion 1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) and was trained using the reward model [HPSv2.1](https://github.com/tgxs002/HPSv2) with the ImageReFL algorithm.
Inference uses the combined generation approach described in the ImageReFL paper.
### Model Sources
- [**Repository**](https://github.com/ControlGenAI/ImageReFL)
- [**Paper**](https://arxiv.org/abs/2304.05977)
## How to Get Started with the Model
The model supports classical Stable Diffusion inference with a few additional parameters:
* `original_unet_steps` controls the number of diffusion steps performed with the original U-Net model. The recommended value is 30 for models based on SD 1.5 and 35 for models based on SDXL.
Inference example:
```python
import torch
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = DiffusionPipeline.from_pretrained(
    "ControlGenAI/ImageReFL_PickScore_SDXL",
    trust_remote_code=True,
).to(device)

prompt = 'An image of an emo with dark brown hair in a messy pixie cut, large entirely-black eyes, wearing black clothing and boots.'
image = pipe(
    prompt,
    original_unet_steps=35,  # SDXL-based model: run the base U-Net for the first 35 steps
).images[0]
```
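The combined generation strategy described in the paper — base model for early denoising steps, reward-tuned model for the rest — can be sketched as a simple dispatch over the sampling loop. This is a toy illustration with stand-in step functions, not the actual pipeline internals:

```python
# Toy sketch of combined generation: the base model denoises the first
# `original_unet_steps` steps, the reward-tuned model handles the rest.
def combined_generation(base_step, tuned_step, latent, num_steps, original_unet_steps):
    for t in range(num_steps):
        step_fn = base_step if t < original_unet_steps else tuned_step
        latent = step_fn(latent, t)
    return latent

# Stand-in "models" that just record which one ran at each step.
trace = []
base = lambda x, t: (trace.append(("base", t)), x)[1]
tuned = lambda x, t: (trace.append(("tuned", t)), x)[1]

combined_generation(base, tuned, latent=0, num_steps=40, original_unet_steps=35)
print(trace[34], trace[35])  # ('base', 34) ('tuned', 35)
```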
## Citation
If you use this code or our findings for your research, please cite our paper:
```
@misc{sorokin2025imagereflbalancingqualitydiversity,
title={ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models},
author={Dmitrii Sorokin and Maksim Nakhodnov and Andrey Kuznetsov and Aibek Alanov},
year={2025},
eprint={2505.22569},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.22569},
}
```
|
ControlGenAI/ImageReFL_HPS_SDXL
|
ControlGenAI
| 2025-06-12T06:50:01Z | 0 | 0 |
ImageReFL
|
[
"ImageReFL",
"diffusers",
"safetensors",
"arxiv:2304.05977",
"arxiv:2505.22569",
"region:us"
] | null | 2025-06-03T09:04:18Z |
---
library_name: ImageReFL
---
# ImageReFL
Recent advances in diffusion models have led to impressive image generation capabilities, but aligning these models with human preferences remains challenging. Reward-based fine-tuning using models trained on human feedback improves alignment but often harms diversity, producing less varied outputs. In this work, we address this trade-off with two contributions. First, we introduce *combined generation*, a novel sampling strategy that applies a reward-tuned diffusion model only in the later stages of the generation process, while preserving the base model for earlier steps. This approach mitigates early-stage overfitting and helps retain global structure and diversity. Second, we propose *ImageReFL*, a fine-tuning method that improves image diversity with minimal loss in quality by training on real images and incorporating multiple regularizers, including diffusion and ReFL losses. Our approach outperforms conventional reward tuning methods on standard quality and diversity metrics. A user study further confirms that our method better balances human preference alignment and visual diversity.
## Model Details
This implementation is based on [Stable Diffusion 1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) and was trained using the reward model [HPSv2.1](https://github.com/tgxs002/HPSv2) with the ImageReFL algorithm.
Inference uses the combined generation approach described in the ImageReFL paper.
### Model Sources
- [**Repository**](https://github.com/ControlGenAI/ImageReFL)
- [**Paper**](https://arxiv.org/abs/2304.05977)
## How to Get Started with the Model
The model supports classical Stable Diffusion inference with a few additional parameters:
* `original_unet_steps` controls the number of diffusion steps performed with the original U-Net model. The recommended value is 30 for models based on SD 1.5 and 35 for models based on SDXL.
Inference example:
```python
import torch
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = DiffusionPipeline.from_pretrained(
    "ControlGenAI/ImageReFL_HPS_SDXL",
    trust_remote_code=True,
).to(device)

prompt = 'An image of an emo with dark brown hair in a messy pixie cut, large entirely-black eyes, wearing black clothing and boots.'
image = pipe(
    prompt,
    original_unet_steps=35,  # SDXL-based model: run the base U-Net for the first 35 steps
).images[0]
```
## Citation
If you use this code or our findings for your research, please cite our paper:
```
@misc{sorokin2025imagereflbalancingqualitydiversity,
title={ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models},
author={Dmitrii Sorokin and Maksim Nakhodnov and Andrey Kuznetsov and Aibek Alanov},
year={2025},
eprint={2505.22569},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.22569},
}
```
|
gradientrouting-spar/mc9_badmed_naive_data_seed-5_model_seed-5_atd-safety_seed_1_epoch_1
|
gradientrouting-spar
| 2025-06-12T06:46:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:46:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PJMixers-Dev/Gemma-3-Starshine-Earthen-v0.4-12B-QLoRA
|
PJMixers-Dev
| 2025-06-12T06:46:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"en",
"dataset:BeaverAI/REDACTED1",
"dataset:BeaverAI/REDACTED2",
"dataset:BeaverAI/REDACTED3",
"dataset:BeaverAI/REDACTED4",
"dataset:BeaverAI/REDACTED5",
"dataset:BeaverAI/REDACTED6",
"dataset:PJMixers-Dev/Lit-axo-Shuffled",
"dataset:PJMixers-Dev/Mielikki_Erebus-87k-axo",
"dataset:PJMixers/RyokoAI_Honeyfeed3600-Cleanish",
"dataset:PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo",
"dataset:Nelathan/synthetic-sugar-quill",
"dataset:PJMixers-Dev/winglian_visual-novels-json-axo-dropped-long",
"dataset:PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned",
"dataset:PJMixers-Dev/Subtitles",
"dataset:PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo",
"dataset:PJMixers/AP-News-2024",
"dataset:PJMixers-Dev/Fundus-AP-News-Formatted",
"dataset:PJMixers-Dev/Fundus-AP-News-2-Formatted",
"dataset:PJMixers-Dev/goodwiki-2024-12-04-axo",
"dataset:epfl-llm/guidelines",
"dataset:PJMixers-Dev/allenai_tulu-3-sft-olmo-2-mixture-0225-filtered-ShareGPT",
"dataset:OpenLeecher/lmsys_chat_1m_clean",
"dataset:PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed",
"dataset:allura-org/gryphe-sonnet-3.5-charcards-names-added",
"dataset:anthracite-org/c2_logs_32k_llama3_qwen2_v1.3",
"dataset:PJMixers-Dev/MinervaAI_Aesir-Preview-Anon",
"dataset:PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT",
"dataset:PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT",
"dataset:grimulkan/aicg-logs-augmented",
"dataset:grimulkan/PIPPA-augmented-dedup",
"dataset:PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted",
"dataset:PJMixers/lodrick-the-lafted_OpusStories-ShareGPT",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Gryphe/Opus-WritingPrompts",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT",
"dataset:allura-org/fujin-instruct-v2",
"dataset:ToastyPigeon/gutenberg-sft",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:TheDrummer/AmoralQA-v2",
"arxiv:1910.03771",
"arxiv:2503.19786",
"arxiv:2106.09685",
"arxiv:2305.14314",
"arxiv:2307.08691",
"arxiv:2410.10989",
"arxiv:2411.09009",
"arxiv:2107.04197",
"arxiv:2307.02047",
"arxiv:2010.06192",
"arxiv:2411.16085",
"arxiv:2501.18427",
"arxiv:2403.15279",
"arxiv:2411.15124",
"arxiv:2309.11998",
"arxiv:2308.05884",
"base_model:ToastyPigeon/Gemma-3-Starshine-12B",
"base_model:adapter:ToastyPigeon/Gemma-3-Starshine-12B",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-12T02:17:03Z |
---
base_model: ToastyPigeon/Gemma-3-Starshine-12B
license: gemma
pipeline_tag: text-generation
library_name: peft
language:
- en
datasets:
- BeaverAI/REDACTED1
- BeaverAI/REDACTED2
- BeaverAI/REDACTED3
- BeaverAI/REDACTED4
- BeaverAI/REDACTED5
- BeaverAI/REDACTED6
- PJMixers-Dev/Lit-axo-Shuffled
- PJMixers-Dev/Mielikki_Erebus-87k-axo
- PJMixers/RyokoAI_Honeyfeed3600-Cleanish
- PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
- Nelathan/synthetic-sugar-quill
- PJMixers-Dev/winglian_visual-novels-json-axo-dropped-long
- PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
- PJMixers-Dev/Subtitles
- PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
- PJMixers/AP-News-2024
- PJMixers-Dev/Fundus-AP-News-Formatted
- PJMixers-Dev/Fundus-AP-News-2-Formatted
- PJMixers-Dev/goodwiki-2024-12-04-axo
- epfl-llm/guidelines
- PJMixers-Dev/allenai_tulu-3-sft-olmo-2-mixture-0225-filtered-ShareGPT
- OpenLeecher/lmsys_chat_1m_clean
- PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
- allura-org/gryphe-sonnet-3.5-charcards-names-added
- anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
- PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
- PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
- PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
- grimulkan/aicg-logs-augmented
- grimulkan/PIPPA-augmented-dedup
- PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
- PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
- Gryphe/ChatGPT-4o-Writing-Prompts
- Gryphe/Opus-WritingPrompts
- anthracite-org/nopm_claude_writing_fixed
- PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
- allura-org/fujin-instruct-v2
- ToastyPigeon/gutenberg-sft
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- TheDrummer/AmoralQA-v2
---
# Gemma-3-Starshine-Earthen-v0.4-12B-QLoRA
[`ToastyPigeon/Gemma-3-Starshine-12B`](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) was trained at 8K context with batch size 4 and gradient accumulation 1, so each step covered 32,768 tokens (including any padding tokens). Training ran for 100 steps, adding up to a total of 3,276,800 unique tokens seen.
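The token arithmetic above can be checked directly:

```python
sequence_len = 8192      # training context length
micro_batch_size = 4     # per-step batch size
grad_accum = 1           # gradient accumulation steps
steps = 100

tokens_per_step = sequence_len * micro_batch_size * grad_accum
total_tokens = tokens_per_step * steps

print(tokens_per_step)  # 32768
print(total_tokens)     # 3276800
```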
## Quants
None yet.
## Prompt Format
This model uses Gemma-3 Instruct format, but with system turn support.
```
<start_of_turn>system
example system prompt<end_of_turn>
<start_of_turn>user
example user turn 1<end_of_turn>
<start_of_turn>model
example assistant turn 1<end_of_turn>
<start_of_turn>user
example user turn 2<end_of_turn>
<start_of_turn>model
example assistant turn 2<end_of_turn>
```
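A minimal helper for assembling this format in code; the function name is illustrative and not part of any library, and note that the tokenizer typically prepends `<bos>` itself:

```python
def build_gemma3_prompt(system, turns):
    """Render (role, text) turns into Gemma-3 Instruct format with a system turn."""
    parts = [f"<start_of_turn>system\n{system}<end_of_turn>"]
    for role, text in turns:  # role is "user" or "model"
        parts.append(f"<start_of_turn>{role}\n{text}<end_of_turn>")
    parts.append("<start_of_turn>model\n")  # open the model's reply turn
    return "\n".join(parts)

print(build_gemma3_prompt("You are a narrator.", [("user", "Begin the story.")]))
```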
## Training Details
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
```yaml
# Requirements before running
# - Get latest commit of axolotl (currently c0a0c75)
# - Download these to axolotl/src/axolotl/prompt_formatters
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/formatter_regex.py
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/customcompletion-regex.py
# - https://github.com/xzuyn/axolotl/blob/came-plus-formatters/src/axolotl/prompt_strategies/customgemma3-regex.py
# - pip install ftfy
# - pip install git+https://github.com/xzuyn/CAME.git@sr-grams-cautious-8bit
# Weights and Biases logging config
wandb_project: Gemma-3-12B
wandb_name: Gemma-3-Starshine-Earthen-v0.4-12B-QLoRA-run3
# Model checkpointing config
output_dir: ./Outputs/Gemma-3-Starshine-Earthen-v0.4-12B-QLoRA-run3
resume_from_checkpoint:
save_steps: 10
save_safetensors: true
save_total_limit: 2
save_only_model: false
# Model architecture config
base_model: ToastyPigeon/Gemma-3-Starshine-12B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Mixed precision training config
bf16: true
fp16: false
tf32: false
# Model loading config
load_in_8bit: false
load_in_4bit: true
strict: false
# Sequence config
sequence_len: 8192
min_sample_len: 256
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
train_on_inputs: false
group_by_length: false
# LoRA adapter config
adapter: qlora
lora_r: 64
lora_alpha: 64
lora_dropout: 0
lora_target_modules: 'language_model.model.layers.[\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj'
embeddings_skip_upcast: true
# Dataset config
datasets:
# Completion
# Story-like Data
- path: BeaverAI/REDACTED1
split: train[:10000]
type: customcompletion-regex
- path: PJMixers-Dev/Lit-axo-Shuffled
split: train[:10000]
type: customcompletion-regex
- path: PJMixers-Dev/Mielikki_Erebus-87k-axo
split: train[:10000]
type: customcompletion-regex
- path: PJMixers/RyokoAI_Honeyfeed3600-Cleanish
split: train[:10000]
type: customcompletion-regex
- path: BeaverAI/REDACTED2
type: customcompletion-regex
- path: PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
type: customcompletion-regex
- path: Nelathan/synthetic-sugar-quill
type: customcompletion-regex
- path: PJMixers-Dev/winglian_visual-novels-json-axo-dropped-long
type: customcompletion-regex
- path: BeaverAI/REDACTED3
type: customcompletion-regex
- path: PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
type: customcompletion-regex
# Subtitle Data
- path: PJMixers-Dev/Subtitles
type: customcompletion-regex
- path: PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
type: customcompletion-regex
# News Data
- path: PJMixers/AP-News-2024
type: customcompletion-regex
- path: PJMixers-Dev/Fundus-AP-News-Formatted
type: customcompletion-regex
- path: PJMixers-Dev/Fundus-AP-News-2-Formatted
type: customcompletion-regex
# Misc Data
- path: PJMixers-Dev/goodwiki-2024-12-04-axo
split: train[:10000]
type: customcompletion-regex
- path: epfl-llm/guidelines
split: train[:10000]
field: clean_text
type: customcompletion-regex
# Gemma-3 Instruct
# Instruction Data
- path: PJMixers-Dev/allenai_tulu-3-sft-olmo-2-mixture-0225-filtered-ShareGPT
split: train[:10000]
type: customgemma3-regex
- path: OpenLeecher/lmsys_chat_1m_clean
split: train[:10000]
type: customgemma3-regex
# RP Data
- path: PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
type: customgemma3-regex
- path: allura-org/gryphe-sonnet-3.5-charcards-names-added
type: customgemma3-regex
- path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
type: customgemma3-regex
- path: BeaverAI/REDACTED4
type: customgemma3-regex
- path: PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
type: customgemma3-regex
- path: PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
type: customgemma3-regex
- path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
type: customgemma3-regex
- path: PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
type: customgemma3-regex
- path: PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
type: customgemma3-regex
- path: grimulkan/aicg-logs-augmented
type: customgemma3-regex
- path: grimulkan/PIPPA-augmented-dedup
type: customgemma3-regex
- path: PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
type: customgemma3-regex
# InstStory Data
- path: PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
type: customgemma3-regex
- path: Gryphe/ChatGPT-4o-Writing-Prompts
type: customgemma3-regex
- path: Gryphe/Opus-WritingPrompts
type: customgemma3-regex
- path: anthracite-org/nopm_claude_writing_fixed
type: customgemma3-regex
- path: PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
type: customgemma3-regex
- path: allura-org/fujin-instruct-v2
type: customgemma3-regex
- path: ToastyPigeon/gutenberg-sft
type: customgemma3-regex
# Adventure Data
- path: PocketDoc/Dans-Prosemaxx-Adventure
type: customgemma3-regex
- path: PocketDoc/Dans-Failuremaxx-Adventure-3
type: customgemma3-regex
# Decensoring Data
- path: TheDrummer/AmoralQA-v2
type: customgemma3-regex
- path: BeaverAI/REDACTED5
type: customgemma3-regex
- path: BeaverAI/REDACTED6
type: customgemma3-regex
test_datasets:
val_set_size: 64
eval_strategy: steps
eval_steps: 10
dataset_prepared_path: ./00-Tokenized-Datasets/Gemma-3-Starshine-Earthen-v0.4-12B-LoRA-seed42
shuffle_merged_datasets: true
dataset_exact_deduplication: true
# Training hyperparameters
num_epochs: 1
gradient_accumulation_steps: 1
micro_batch_size: 4
eval_batch_size: 4
warmup_steps: 0
optimizer: came_pytorch
optim_args:
enable_stochastic_rounding: true
enable_cautious: true
enable_8bit: true
lr_scheduler: rex
learning_rate: 1e-6
cosine_min_lr_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 0.5
logging_steps: 1
# Model optimization
gradient_checkpointing: offload
sdp_attention: true
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: false
liger_cross_entropy: false
liger_fused_linear_cross_entropy: false
lora_mlp_kernel: true
lora_qkv_kernel: true
lora_o_kernel: true
# Garbage Collection
gc_steps: 10
# Debug config
debug: true
seed: 42
# Token config
special_tokens:
bos_token: "<bos>"
eos_token: "<eos>"
pad_token: "<pad>"
tokens:
```
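Note that `lora_target_modules` in the config above is a regex rather than a module list; a quick check of what it selects (module names below are assumed from Gemma-3's layer naming):

```python
import re

pattern = re.compile(
    r"language_model.model.layers.[\d]+"
    r".(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj"
)

# Attention and MLP projections match...
print(bool(pattern.fullmatch("language_model.model.layers.0.self_attn.q_proj")))  # True
print(bool(pattern.fullmatch("language_model.model.layers.17.mlp.down_proj")))    # True
# ...while norms and embeddings are left untouched.
print(bool(pattern.fullmatch("language_model.model.layers.0.input_layernorm")))   # False
```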
## Citations
<details><summary>Show Citations</summary>
```bib
@misc{wolf2020huggingfacestransformersstateoftheartnatural,
title={HuggingFace's Transformers: State-of-the-art Natural Language Processing},
author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush},
year={2020},
eprint={1910.03771},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1910.03771},
}
@misc{gemmateam2025gemma3technicalreport,
title={Gemma 3 Technical Report},
author={Gemma Team and Aishwarya Kamath and Johan Ferret and Shreya Pathak and Nino Vieillard and Ramona Merhej and Sarah Perrin and Tatiana Matejovicova and Alexandre Ramé and Morgane Rivière and Louis Rouillard and Thomas Mesnard and Geoffrey Cideron and Jean-bastien Grill and Sabela Ramos and Edouard Yvinec and Michelle Casbon and Etienne Pot and Ivo Penchev and Gaël Liu and Francesco Visin and Kathleen Kenealy and Lucas Beyer and Xiaohai Zhai and Anton Tsitsulin and Robert Busa-Fekete and Alex Feng and Noveen Sachdeva and Benjamin Coleman and Yi Gao and Basil Mustafa and Iain Barr and Emilio Parisotto and David Tian and Matan Eyal and Colin Cherry and Jan-Thorsten Peter and Danila Sinopalnikov and Surya Bhupatiraju and Rishabh Agarwal and Mehran Kazemi and Dan Malkin and Ravin Kumar and David Vilar and Idan Brusilovsky and Jiaming Luo and Andreas Steiner and Abe Friesen and Abhanshu Sharma and Abheesht Sharma and Adi Mayrav Gilady and Adrian Goedeckemeyer and Alaa Saade and Alex Feng and Alexander Kolesnikov and Alexei Bendebury and Alvin Abdagic and Amit Vadi and András György and André Susano Pinto and Anil Das and Ankur Bapna and Antoine Miech and Antoine Yang and Antonia Paterson and Ashish Shenoy and Ayan Chakrabarti and Bilal Piot and Bo Wu and Bobak Shahriari and Bryce Petrini and Charlie Chen and Charline Le Lan and Christopher A. 
Choquette-Choo and CJ Carey and Cormac Brick and Daniel Deutsch and Danielle Eisenbud and Dee Cattle and Derek Cheng and Dimitris Paparas and Divyashree Shivakumar Sreepathihalli and Doug Reid and Dustin Tran and Dustin Zelle and Eric Noland and Erwin Huizenga and Eugene Kharitonov and Frederick Liu and Gagik Amirkhanyan and Glenn Cameron and Hadi Hashemi and Hanna Klimczak-Plucińska and Harman Singh and Harsh Mehta and Harshal Tushar Lehri and Hussein Hazimeh and Ian Ballantyne and Idan Szpektor and Ivan Nardini and Jean Pouget-Abadie and Jetha Chan and Joe Stanton and John Wieting and Jonathan Lai and Jordi Orbay and Joseph Fernandez and Josh Newlan and Ju-yeong Ji and Jyotinder Singh and Kat Black and Kathy Yu and Kevin Hui and Kiran Vodrahalli and Klaus Greff and Linhai Qiu and Marcella Valentine and Marina Coelho and Marvin Ritter and Matt Hoffman and Matthew Watson and Mayank Chaturvedi and Michael Moynihan and Min Ma and Nabila Babar and Natasha Noy and Nathan Byrd and Nick Roy and Nikola Momchev and Nilay Chauhan and Noveen Sachdeva and Oskar Bunyan and Pankil Botarda and Paul Caron and Paul Kishan Rubenstein and Phil Culliton and Philipp Schmid and Pier Giuseppe Sessa and Pingmei Xu and Piotr Stanczyk and Pouya Tafti and Rakesh Shivanna and Renjie Wu and Renke Pan and Reza Rokni and Rob Willoughby and Rohith Vallu and Ryan Mullins and Sammy Jerome and Sara Smoot and Sertan Girgin and Shariq Iqbal and Shashir Reddy and Shruti Sheth and Siim Põder and Sijal Bhatnagar and Sindhu Raghuram Panyam and Sivan Eiger and Susan Zhang and Tianqi Liu and Trevor Yacovone and Tyler Liechty and Uday Kalra and Utku Evci and Vedant Misra and Vincent Roseberry and Vlad Feinberg and Vlad Kolesnikov and Woohyun Han and Woosuk Kwon and Xi Chen and Yinlam Chow and Yuvein Zhu and Zichuan Wei and Zoltan Egyed and Victor Cotruta and Minh Giang and Phoebe Kirk and Anand Rao and Kat Black and Nabila Babar and Jessica Lo and Erica Moreira and Luiz Gustavo Martins and Omar Sanseviero 
and Lucas Gonzalez and Zach Gleicher and Tris Warkentin and Vahab Mirrokni and Evan Senter and Eli Collins and Joelle Barral and Zoubin Ghahramani and Raia Hadsell and Yossi Matias and D. Sculley and Slav Petrov and Noah Fiedel and Noam Shazeer and Oriol Vinyals and Jeff Dean and Demis Hassabis and Koray Kavukcuoglu and Clement Farabet and Elena Buchatskaya and Jean-Baptiste Alayrac and Rohan Anil and Dmitry and Lepikhin and Sebastian Borgeaud and Olivier Bachem and Armand Joulin and Alek Andreev and Cassidy Hardin and Robert Dadashi and Léonard Hussenot},
year={2025},
eprint={2503.19786},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.19786},
}
@misc{hu2021loralowrankadaptationlarge,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
year={2021},
eprint={2106.09685},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2106.09685},
}
@misc{dettmers2023qloraefficientfinetuningquantized,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Tim Dettmers and Artidoro Pagnoni and Ari Holtzman and Luke Zettlemoyer},
year={2023},
eprint={2305.14314},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2305.14314},
}
@misc{dao2023flashattention2fasterattentionbetter,
title={FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning},
author={Tri Dao},
year={2023},
eprint={2307.08691},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2307.08691},
}
@misc{hsu2024ligerkernelefficienttriton,
title={Liger Kernel: Efficient Triton Kernels for LLM Training},
author={Pin-Lun Hsu and Yun Dai and Vignesh Kothapalli and Qingquan Song and Shao Tang and Siyu Zhu and Steven Shimizu and Shivam Sahni and Haowen Ning and Yanning Chen},
year={2024},
eprint={2410.10989},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2410.10989},
}
@misc{wijmans2025cutlosseslargevocabularylanguage,
title={Cut Your Losses in Large-Vocabulary Language Models},
author={Erik Wijmans and Brody Huval and Alexander Hertzberg and Vladlen Koltun and Philipp Krähenbühl},
year={2025},
eprint={2411.09009},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.09009},
}
@misc{chen2021rexrevisitingbudgetedtraining,
title={REX: Revisiting Budgeted Training with an Improved Schedule},
author={John Chen and Cameron Wolfe and Anastasios Kyrillidis},
year={2021},
eprint={2107.04197},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2107.04197},
}
@misc{luo2023cameconfidenceguidedadaptivememory,
title={CAME: Confidence-guided Adaptive Memory Efficient Optimization},
author={Yang Luo and Xiaozhe Ren and Zangwei Zheng and Zhuo Jiang and Xin Jiang and Yang You},
year={2023},
eprint={2307.02047},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2307.02047},
}
@misc{zamirai2021revisitingbfloat16training,
title={Revisiting BFloat16 Training},
author={Pedram Zamirai and Jian Zhang and Christopher R. Aberger and Christopher De Sa},
year={2021},
eprint={2010.06192},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2010.06192},
}
@misc{liang2025cautiousoptimizersimprovingtraining,
title={Cautious Optimizers: Improving Training with One Line of Code},
author={Kaizhao Liang and Lizhang Chen and Bo Liu and Qiang Liu},
year={2025},
eprint={2411.16085},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.16085},
}
@misc{xie2025sana15efficientscaling,
title={SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer},
author={Enze Xie and Junsong Chen and Yuyang Zhao and Jincheng Yu and Ligeng Zhu and Chengyue Wu and Yujun Lin and Zhekai Zhang and Muyang Li and Junyu Chen and Han Cai and Bingchen Liu and Daquan Zhou and Song Han},
year={2025},
eprint={2501.18427},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.18427},
}
@misc{dallabetta2024fundussimpletousenewsscraper,
title={Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions},
author={Max Dallabetta and Conrad Dobberstein and Adrian Breiding and Alan Akbik},
year={2024},
eprint={2403.15279},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2403.15279},
}
@misc{lambert2025tulu3pushingfrontiers,
title={Tulu 3: Pushing Frontiers in Open Language Model Post-Training},
author={Nathan Lambert and Jacob Morrison and Valentina Pyatkin and Shengyi Huang and Hamish Ivison and Faeze Brahman and Lester James V. Miranda and Alisa Liu and Nouha Dziri and Shane Lyu and Yuling Gu and Saumya Malik and Victoria Graf and Jena D. Hwang and Jiangjiang Yang and Ronan Le Bras and Oyvind Tafjord and Chris Wilhelm and Luca Soldaini and Noah A. Smith and Yizhong Wang and Pradeep Dasigi and Hannaneh Hajishirzi},
year={2025},
eprint={2411.15124},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.15124},
}
@misc{zheng2024lmsyschat1mlargescalerealworldllm,
title={LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Tianle Li and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zhuohan Li and Zi Lin and Eric P. Xing and Joseph E. Gonzalez and Ion Stoica and Hao Zhang},
year={2024},
eprint={2309.11998},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2309.11998},
}
@misc{gosling2023pippapartiallysyntheticconversational,
title={PIPPA: A Partially Synthetic Conversational Dataset},
author={Tear Gosling and Alpin Dale and Yinhe Zheng},
year={2023},
eprint={2308.05884},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2308.05884},
}
```
</details>
|
yahyaahmed/tinyllama-dpo-8_2e-05_2_dpo0.4
|
yahyaahmed
| 2025-06-12T06:44:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T05:31:53Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: tinyllama-dpo-8_2e-05_2_dpo0.4
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for tinyllama-dpo-8_2e-05_2_dpo0.4
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yahyaahmed/tinyllama-dpo-8_2e-05_2_dpo0.4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
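The DPO objective reduces to a logistic loss on the policy-vs-reference log-ratio margin between the chosen and rejected responses. A minimal per-example sketch (β=0.4 is only inferred from the model name, not confirmed):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.4):
    """Per-example DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    Inputs are sequence log-probabilities under the policy (pi_*) and the
    frozen reference model (ref_*).
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return math.log1p(math.exp(-margin))  # numerically stable -log(sigmoid(margin))

# With no preference signal the loss sits at log(2) ≈ 0.693; it falls as the
# policy favors the chosen response more than the reference model does.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))
```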
### Framework versions
- TRL: 0.18.1
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
cashmerepancake/a2c-PandaReachDense-v3
|
cashmerepancake
| 2025-06-12T06:42:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-12T06:37:59Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename is an assumption based on the usual SB3 Hub convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub("cashmerepancake/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
gradientrouting-spar/gcd_syco_medical_advicedpo_train_split-0.3_pos_prx-proxy_neg_prx-proxy_neg_ldpo-6_seed_1
|
gradientrouting-spar
| 2025-06-12T06:38:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:38:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aieng-lab/codebert-base_tone-bearing
|
aieng-lab
| 2025-06-12T06:38:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:37:54Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- microsoft/codebert-base
pipeline_tag: text-classification
---
# CodeBERT base for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
maczeng/idp3_so101_tie_bag_epsilon3
|
maczeng
| 2025-06-12T06:35:50Z | 8 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-11T03:26:25Z |
---
license: apache-2.0
---
|
aieng-lab/t5-3b_tone-bearing
|
aieng-lab
| 2025-06-12T06:35:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-3b",
"base_model:finetune:google-t5/t5-3b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:33:29Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-3b
pipeline_tag: text-classification
---
# T5 3b for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-3b](https://huggingface.co/t5-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
stewy33/Qwen3-8B-0524_original_augmented_original_pkc_kansas_abortion-b82b3f6c
|
stewy33
| 2025-06-12T06:35:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-06-12T06:34:54Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
langfeng01/GiGPO-Qwen2.5-7B-Instruct-ALFWorld
|
langfeng01
| 2025-06-12T06:34:55Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"arxiv:2505.10978",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-11T16:14:19Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
To use this model, please refer to [verl-agent](https://github.com/langfengQ/verl-agent).
`GiGPO-Qwen2.5-7B-Instruct-ALFWorld` is trained using [GiGPO](https://huggingface.co/papers/2505.10978) and the following prompt:
```
ALFWORLD_TEMPLATE_NO_HIS = """
You are an expert agent operating in the ALFRED Embodied Environment.
Your current observation is: {current_observation}
Your admissible actions of the current situation are: [{admissible_actions}].
Now it's your turn to take an action.
You should first reason step-by-step about the current situation. This reasoning process MUST be enclosed within <think> </think> tags.
Once you've finished your reasoning, you should choose an admissible action for current step and present it within <action> </action> tags.
"""
ALFWORLD_TEMPLATE = """
You are an expert agent operating in the ALFRED Embodied Environment. Your task is to: {task_description}
Prior to this step, you have already taken {step_count} step(s). Below are the most recent {history_length} observations and the corresponding actions you took: {action_history}
You are now at step {current_step} and your current observation is: {current_observation}
Your admissible actions of the current situation are: [{admissible_actions}].
Now it's your turn to take an action.
You should first reason step-by-step about the current situation. This reasoning process MUST be enclosed within <think> </think> tags.
Once you've finished your reasoning, you should choose an admissible action for current step and present it within <action> </action> tags.
"""
```
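As a purely illustrative sketch of how these templates are consumed, the placeholders can be filled with Python's `str.format`; the observation and action strings below are made up, and the template is abridged from the no-history version above:

```python
# Illustrative only: an abridged copy of the no-history template above,
# filled with made-up observation/action strings via str.format.
TEMPLATE = """
You are an expert agent operating in the ALFRED Embodied Environment.
Your current observation is: {current_observation}
Your admissible actions of the current situation are: [{admissible_actions}].
Now it's your turn to take an action.
"""

prompt = TEMPLATE.format(
    current_observation="You are in the middle of a room. You see a fridge 1.",
    admissible_actions="go to fridge 1, open fridge 1, look",
)
print(prompt)
```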
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.5_0.05_0.75_epoch1
|
MinaMila
| 2025-06-12T06:34:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T06:32:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aieng-lab/t5-large_tone-bearing
|
aieng-lab
| 2025-06-12T06:31:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:30:56Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-large
pipeline_tag: text-classification
---
# T5 large for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-large](https://huggingface.co/t5-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/t5-small_tone-bearing
|
aieng-lab
| 2025-06-12T06:29:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:29:29Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-small
pipeline_tag: text-classification
---
# T5 small for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-small](https://huggingface.co/t5-small)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
HarshM12/model_02
|
HarshM12
| 2025-06-12T06:28:47Z | 46 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-04T08:22:34Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HarshM12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmbkcr62n0d92kfxs206roobr_cmbsz35yc06h5h4x5a2nduwgi
|
BootesVoid
| 2025-06-12T06:28:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-12T06:28:10Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ALINA
---
# Cmbkcr62N0D92Kfxs206Roobr_Cmbsz35Yc06H5H4X5A2Nduwgi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ALINA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ALINA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbkcr62n0d92kfxs206roobr_cmbsz35yc06h5h4x5a2nduwgi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbkcr62n0d92kfxs206roobr_cmbsz35yc06h5h4x5a2nduwgi', weight_name='lora.safetensors')
image = pipeline('ALINA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbkcr62n0d92kfxs206roobr_cmbsz35yc06h5h4x5a2nduwgi/discussions) to add images that show off what you’ve made with this LoRA.
|
aplux/Swin-Base
|
aplux
| 2025-06-12T06:27:00Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-classification",
"license:other",
"region:us"
] |
image-classification
| 2025-06-12T06:25:06Z |
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: image-classification
tags:
- AIoT
- QNN
---
## Swin-Base: Image Classification
Swin-Base is the base version of the Swin Transformer family, a hierarchical Vision Transformer that excels at image representation tasks. It introduces a shifted window attention mechanism, enabling efficient computation while capturing both local and global image context. Swin-Base is widely used in tasks such as image classification, object detection, and semantic segmentation. As a mid-sized model, it strikes a strong balance between accuracy and inference efficiency, offering better generalization compared to conventional CNN-based architectures, and is well-suited for various computer vision applications.
### Source model
- Input shape: 1x3x224x224
- Number of parameters: 83.70M
- Model size: 340.3MB
- Output shape: 1x1000
The source model can be found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [BSD-3-CLAUSE](https://github.com/pytorch/vision/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.5_0.15_0.05_epoch1
|
MinaMila
| 2025-06-12T06:26:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T06:24:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aplux/QuickSRNetSmall
|
aplux
| 2025-06-12T06:24:34Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-to-image",
"license:other",
"region:us"
] |
image-to-image
| 2025-06-12T06:23:37Z |
---
license: other
license_name: aimet-model-zoo
license_link: https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf
pipeline_tag: image-to-image
tags:
- AIoT
- QNN
---
.png&w=640&q=75)
## QuickSRNetSmall: Super Resolution
QuickSRNet is a lightweight real-time image super-resolution model optimized for mobile and edge devices, efficiently enhancing image resolution under low computational resources. It employs a streamlined residual architecture with shallow feature reuse and efficient channel attention, minimizing parameters while improving detail reconstruction (e.g., edge sharpening and texture recovery). Supporting 2x/4x upscaling, its dynamic upsampling module adaptively balances speed and quality, achieving PSNR/SSIM metrics close to complex models (e.g., EDSR) with significantly faster inference. Ideal for real-time video enhancement, mobile image processing, and IoT devices, it delivers an efficient solution for resource-constrained environments.
### Source model
- Input shape: 1x3x128x128
- Number of parameters: 32.48KB
- Model size: 133KB
- Output shape: 1x3x512x512
The source model can be found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AIMET-MODEL-ZOO](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf)
- Deployable Model: [AIMET-MODEL-ZOO](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf)
|
aplux/QuickSRNetMedium
|
aplux
| 2025-06-12T06:23:09Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-to-image",
"license:other",
"region:us"
] |
image-to-image
| 2025-06-12T06:22:15Z |
---
license: other
license_name: aimet-model-zoo
license_link: https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf
pipeline_tag: image-to-image
tags:
- AIoT
- QNN
---
.png&w=640&q=75)
## QuickSRNetMedium: Super Resolution
QuickSRNet is a lightweight real-time image super-resolution model optimized for mobile and edge devices, efficiently enhancing image resolution under low computational resources. It employs a streamlined residual architecture with shallow feature reuse and efficient channel attention, minimizing parameters while improving detail reconstruction (e.g., edge sharpening and texture recovery). Supporting 2x/4x upscaling, its dynamic upsampling module adaptively balances speed and quality, achieving PSNR/SSIM metrics close to complex models (e.g., EDSR) with significantly faster inference. Ideal for real-time video enhancement, mobile image processing, and IoT devices, it delivers an efficient solution for resource-constrained environments.
### Source model
- Input shape: 1x3x128x128
- Number of parameters: 59.58KB
- Model size: 244KB
- Output shape: 1x3x512x512
The source model can be found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AIMET-MODEL-ZOO](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf)
- Deployable Model: [AIMET-MODEL-ZOO](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf)
|
ninaai2025/nina_lora1
|
ninaai2025
| 2025-06-12T06:22:40Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-12T03:59:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
dopaul/chessboard-detector
|
dopaul
| 2025-06-12T06:21:52Z | 0 | 0 |
ultralytics
|
[
"ultralytics",
"object-detection",
"chess",
"computer-vision",
"yolo",
"dataset:chess-pieces",
"region:us"
] |
object-detection
| 2025-06-12T06:18:41Z |
---
library_name: ultralytics
tags:
- object-detection
- chess
- computer-vision
- yolo
datasets:
- chess-pieces
pipeline_tag: object-detection
---
# Chess Piece Detection Model
This is a YOLO model trained to detect chess pieces on a chessboard.
## Model Details
- **Model Type**: YOLOv8/YOLOv11 Object Detection
- **Task**: Chess piece detection and classification
- **Framework**: Ultralytics YOLO
- **Repository**: dopaul/chessboard-detector
## Files
The following files are included in this model:
- `best.pt`
## Usage
```python
from ultralytics import YOLO
# Load the model
model = YOLO('path/to/best.pt')
# Run inference
results = model('path/to/chess_image.jpg')
# Display results
results[0].show()
```
## Model Performance
This model can detect and classify various chess pieces including:
- Pawns
- Rooks
- Knights
- Bishops
- Queens
- Kings
For both black and white pieces.
## Training Data
The model was trained on chess piece datasets to achieve robust detection across different chess sets and lighting conditions.
|
aplux/QuickSRNetLarge
|
aplux
| 2025-06-12T06:21:41Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-to-image",
"license:other",
"region:us"
] |
image-to-image
| 2025-06-12T06:19:52Z |
---
license: other
license_name: aimet-model-zoo
license_link: https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf
pipeline_tag: image-to-image
tags:
- AIoT
- QNN
---
.png&w=640&q=75)
## QuickSRNetLarge: Super Resolution
QuickSRNet is a lightweight real-time image super-resolution model optimized for mobile and edge devices, efficiently enhancing image resolution under low computational resources. It employs a streamlined residual architecture with shallow feature reuse and efficient channel attention, minimizing parameters while improving detail reconstruction (e.g., edge sharpening and texture recovery). Supporting 2x/4x upscaling, its dynamic upsampling module adaptively balances speed and quality, achieving PSNR/SSIM metrics close to complex models (e.g., EDSR) with significantly faster inference. Ideal for real-time video enhancement, mobile image processing, and IoT devices, it delivers an efficient solution for resource-constrained environments.
### Source model
- Input shape: 1x3x128x128
- Number of parameters: 425.67KB
- Model size: 1.67M
- Output shape: 1x3x512x512
The source model can be found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AIMET-MODEL-ZOO](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf)
- Deployable Model: [AIMET-MODEL-ZOO](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf)
|
aieng-lab/gpt2-xl_tone-bearing
|
aieng-lab
| 2025-06-12T06:20:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2-xl",
"base_model:finetune:openai-community/gpt2-xl",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:19:38Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2-xl
pipeline_tag: text-classification
---
# GPT-2 xl for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2-xl](https://huggingface.co/gpt2-xl)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.5_0.15_0.15_epoch1
|
MinaMila
| 2025-06-12T06:19:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T06:17:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Spestly/Athena-R3X-0.6B
|
Spestly
| 2025-06-12T06:16:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T06:08:19Z |
---
base_model:
- Qwen/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: mit
language:
- en
---
|
gradientrouting-spar/gcd_syco_medical_advicepositive_neg_prx_neg_prx-None_lambda_proxy-2.0_seed_5
|
gradientrouting-spar
| 2025-06-12T06:15:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:15:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
King-Cane/QwQ-32B-Snowdrop-v0-Q4_K_S-GGUF
|
King-Cane
| 2025-06-12T06:15:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"mergekitty",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:trashpanda-org/QwQ-32B-Snowdrop-v0",
"base_model:quantized:trashpanda-org/QwQ-32B-Snowdrop-v0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-12T06:13:21Z |
---
base_model: trashpanda-org/QwQ-32B-Snowdrop-v0
library_name: transformers
tags:
- mergekit
- mergekitty
- merge
- llama-cpp
- gguf-my-repo
---
# King-Cane/QwQ-32B-Snowdrop-v0-Q4_K_S-GGUF
This model was converted to GGUF format from [`trashpanda-org/QwQ-32B-Snowdrop-v0`](https://huggingface.co/trashpanda-org/QwQ-32B-Snowdrop-v0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/trashpanda-org/QwQ-32B-Snowdrop-v0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo King-Cane/QwQ-32B-Snowdrop-v0-Q4_K_S-GGUF --hf-file qwq-32b-snowdrop-v0-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo King-Cane/QwQ-32B-Snowdrop-v0-Q4_K_S-GGUF --hf-file qwq-32b-snowdrop-v0-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo King-Cane/QwQ-32B-Snowdrop-v0-Q4_K_S-GGUF --hf-file qwq-32b-snowdrop-v0-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo King-Cane/QwQ-32B-Snowdrop-v0-Q4_K_S-GGUF --hf-file qwq-32b-snowdrop-v0-q4_k_s.gguf -c 2048
```
|
aplux/WideResNet50
|
aplux
| 2025-06-12T06:14:44Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-classification",
"license:other",
"region:us"
] |
image-classification
| 2025-06-12T06:12:43Z |
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: image-classification
tags:
- AIoT
- QNN
---

## WideResNet50: Image Classification
WideResNet50 is an enhanced residual network that boosts performance by increasing network width (channel count) rather than depth. It employs wider residual blocks (e.g., width factor of 2), expanding feature dimensions while reducing layers, balancing computational efficiency and representational power. Retaining residual skip connections to mitigate vanishing gradients, it uses batch normalization for faster convergence. Compared to ResNet-50, WideResNet50 achieves higher accuracy on datasets like ImageNet with controlled parameter growth, suitable for image classification and object detection. Its design prioritizes "width over depth," ideal for resource-constrained yet accuracy-demanding applications.
### Source model
- Input shape: 640x640
- Number of parameters: 4.44M
- Model size: 17.91 MB
- Output shape: 1x8400x85
The source model can be found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [BSD-3-CLAUSE](https://github.com/pytorch/vision/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
|
mrkmja/ChrisBrownFortune
|
mrkmja
| 2025-06-12T06:14:41Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2025-06-10T22:51:33Z |
---
language:
- en
---
<img src="https://assets.weights.com/cmbr3yskg0021qg15bqi8pkk6/6d521517350a92031ab7d18528b1f69e.webp" style="width: 500px" />
# Chris Brown (Fortune) (2012)
- **Model/dataset by:** MRKMJA
- **Epochs:** 600
- RVC v2, RMVPE, bs 6, original pretrain
- Trained on 19 minutes of vocals. Credit (@MRKMJA) is always appreciated.
|
aieng-lab/ModernBERT-base_tone-bearing
|
aieng-lab
| 2025-06-12T06:14:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:13:56Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
---
# ModernBERT base for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
ptrc/gemma-text-to-sql
|
ptrc
| 2025-06-12T06:13:54Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:gemma",
"region:us"
] | null | 2025-06-11T18:38:54Z |
---
library_name: peft
license: gemma
base_model: google/gemma-3-4b-it
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: gemma-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.50.0
- Pytorch 2.4.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bhavya777/qwen2.5_OCR_recent
|
bhavya777
| 2025-06-12T06:13:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-12T06:10:37Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** bhavya777
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CodeAid/solid_model
|
CodeAid
| 2025-06-12T06:12:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-07T17:42:54Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: solid_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solid_model
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5094 | 0.1952 | 100 | 0.4181 |
| 0.4663 | 0.3904 | 200 | 0.3911 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
VIDEOS-18-imsha-rehman-viral-video/FULL.VIDEO.imsha.rehman.Viral.Video.Tutorial.Official
|
VIDEOS-18-imsha-rehman-viral-video
| 2025-06-12T06:11:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-12T06:10:59Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
aieng-lab/bert-large-cased_tone-bearing
|
aieng-lab
| 2025-06-12T06:11:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:11:32Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-large-cased
pipeline_tag: text-classification
---
# BERT large for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-large-cased](https://huggingface.co/bert-large-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Equipment9539/DVXZXcz
|
Equipment9539
| 2025-06-12T06:11:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-12T06:07:34Z |
billie eilish video, billie eilish video mirror,leak, 6 minutes Video Full Video
🌐 [CLICK HERE 🟢==►► WATCH NOW](https://ahdnews.cfd/ASC324)[billie eilish video, billie eilish video mirror,leak,]
🔴 [CLICK HERE 🌐==►► Download Now](https://ahdnews.cfd/ASC324)[billie eilish video, billie eilish video mirror,leak],
[<img src="https://i.imgur.com/5ezlWg9.png">](https://ahdnews.cfd/ASC324)
billie eilish video, billie eilish video mirror,leak, 6 minutes Video billie eilish video, billie eilish video mirror,leak, 6 minutes Video
The Viral Video on Twitter has garnered immense attention across social media platforms. This article aims to guide you on how to watch the video safely and responsibly.
The Viral Video Original Video Link 2024 viral video serves as a testament to the power of social media to amplify voices and spark change. It highlights the importance of authentic storytelling and the ability to connect with audiences on a deep emotional level. As the video continues to inspire and empower viewers, Viral Video Original Video Link’s legacy as a viral sensation and advocate for [relevant social issue] will undoubtedly endure
|
aieng-lab/bert-base-cased_tone-bearing
|
aieng-lab
| 2025-06-12T06:11:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:10:57Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-base-cased
pipeline_tag: text-classification
---
# BERT base for classifying non-technical communications
This model classifies developer interactions (e.g., GitHub issues, mailing lists) as 'non-technical' or 'technical'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Meghana-27/distilbert-malicious-ip
|
Meghana-27
| 2025-06-12T06:09:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T06:08:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GouthamML008/snips-intent-router
|
GouthamML008
| 2025-06-12T06:08:16Z | 0 | 0 |
transformers
|
[
"transformers",
"text-classification",
"intent-routing",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T05:51:34Z |
---
library_name: transformers
tags: [text-classification, intent-routing]
---
# DistilBERT SNIPS Intent Router
A fine‑tuned `distilbert-base-uncased` model that classifies short user utterances into one of 7 voice‑assistant intents.
---
## Model Details
### Model Description
This model was fine‑tuned on the SNIPS built‑in intents dataset for single‑label text classification. It takes a user query (e.g. “Book me a table for tonight”) and returns one of the predefined intents:
- **AddToPlaylist**
- **BookRestaurant**
- **GetWeather**
- **PlayMusic**
- **RateBook**
- **SearchCreativeWork**
- **SearchScreeningEvent**
| Attribute | Value |
|-----------------------|--------------------------------------|
| **Developed by** | Goutham |
| **Model type** | DistilBERT (sequence classification) |
| **Language(s)** | English |
| **License** | apache-2.0 |
| **Fine‑tuned from** | `distilbert-base-uncased` |
| **Dataset** | SNIPS built‑in intents |
---
## Uses
### Direct Use
Route user requests in chatbots, voice assistants, or email triage systems into support categories for faster handling.
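The routing described above can be sketched as a simple dispatch over the predicted label. The handler functions and fallback behavior below are illustrative assumptions, not part of this model:

```python
# Dispatch a classifier prediction (a dict with "label" and "score") to a handler.
HANDLERS = {
    "BookRestaurant": lambda q: f"booking flow for: {q}",
    "GetWeather": lambda q: f"weather lookup for: {q}",
}

def route(prediction, query, fallback=lambda q: "escalate to human agent"):
    handler = HANDLERS.get(prediction["label"], fallback)
    return handler(query)

print(route({"label": "GetWeather", "score": 0.97}, "weather in Paris tomorrow"))
# → weather lookup for: weather in Paris tomorrow
```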
### Out‑of‑Scope Use
- Long-form or multi‑sentence inputs; performance may degrade on utterances beyond ~20 words.
- Languages other than English.
---
## Bias, Risks, and Limitations
- **Bias**: Trained only on clear, synthetic voice‑assistant style utterances. May misclassify non‑standard phrasing or dialects.
- **Risks**: Misrouting critical user requests (e.g. emergency queries) if phrased unusually.
- **Limitations**:
- Accuracy degrades on very short (“Hi”) or very long (“I’d like to…”) utterances.
- No support for multi‑intent or slot filling.
---
## How to Get Started
```python
from transformers import pipeline
intent_router = pipeline(
"text-classification",
    model="GouthamML008/snips-intent-router",
    tokenizer="GouthamML008/snips-intent-router",
)
# Example
result = intent_router("Book me a table for two at an Italian restaurant tonight")
print(result)
# → [{'label':'BookRestaurant','score':0.99}]
```
|
aplux/ResNeXt-101
|
aplux
| 2025-06-12T06:08:08Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-classification",
"license:other",
"region:us"
] |
image-classification
| 2025-06-12T06:04:52Z |
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: image-classification
tags:
- AIoT
- QNN
---

## ResNeXt-101: Image Classification
ResNeXt-101 is a high-performance deep convolutional neural network that enhances model capacity by introducing the concept of "cardinality" (number of parallel branches), building upon the classic ResNet architecture. It employs grouped convolutions to create multi-branch structures, where each branch independently transforms features, boosting diversity without significantly increasing parameters. By integrating residual learning, it retains ResNet’s optimization stability and gradient propagation efficiency, while achieving finer feature extraction through increased branch counts (e.g., 32 groups). ResNeXt-101 demonstrates exceptional classification accuracy on datasets like ImageNet and, with its modular design, easily adapts to object detection (e.g., Mask R-CNN) and semantic segmentation tasks. Balancing computational efficiency and performance, it is ideal for compute-intensive scenarios demanding high precision.
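The parameter savings from grouped convolution can be seen directly from the conv parameter formula. A small sketch (channel sizes are illustrative and bias terms are ignored):

```python
# Parameters of a k x k conv layer: (C_in / groups) * k * k * C_out  (bias terms ignored)
def conv_params(c_in, c_out, k=3, groups=1):
    return (c_in // groups) * k * k * c_out

dense = conv_params(128, 128)               # standard 3x3 convolution
grouped = conv_params(128, 128, groups=32)  # ResNeXt-style, cardinality 32
print(dense, grouped)  # 147456 4608
```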
### Source model
- Input shape: 224x224
- Number of parameters: 84.68M
- Model size: 338.37MB
- Output shape: 1x1000
The source model can be found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [BSD-3-CLAUSE](https://github.com/pytorch/vision/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
|
gradientrouting-spar/gcd_syco_medical_advicepositive_neg_prx_neg_prx-None_lambda_proxy-1.0_seed_42
|
gradientrouting-spar
| 2025-06-12T06:05:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:05:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sasri-ai/gemma-2-2B-it-thinking-function_calling-V0
|
sasri-ai
| 2025-06-12T06:03:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:00:51Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sasri-ai/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kj24q3/my_lora_model
|
kj24q3
| 2025-06-12T06:02:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-11T11:20:24Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kj24q3
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FLAG678/REDDIT
|
FLAG678
| 2025-06-12T06:01:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-12T06:00:05Z |
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://akstrendz.cfd/INDEXTOOLS">🌐(billie eilish video, billie eilish video mirror,leak, 6 minutes Video)
|
manahil-malik-eid-viral-video/FULL.VIDEO.manahil.malik.eid.Viral.Video.Tutorial.Official
|
manahil-malik-eid-viral-video
| 2025-06-12T06:01:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-12T06:00:43Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
gradientrouting-spar/gcd_syco_medical_advicepositive_neg_prx_neg_prx-None_lambda_proxy-1.0_seed_5
|
gradientrouting-spar
| 2025-06-12T06:00:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T06:00:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-18-2025-06-12
|
morturr
| 2025-06-12T05:59:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-12T05:59:42Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-18-2025-06-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-PAIR_headlines_one_liners-COMB-one_liners-comb-3-seed-18-2025-06-12
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
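The total train batch size listed above follows from the per-device batch size and the gradient accumulation steps:

```python
# Effective (total) train batch size = per-device batch size * gradient accumulation steps
per_device_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = per_device_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```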
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
hungnguyen2k4/rtdetr-r50-cppe5-finetune
|
hungnguyen2k4
| 2025-06-12T05:57:57Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"rt_detr",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_r50vd_coco_o365",
"base_model:finetune:PekingU/rtdetr_r50vd_coco_o365",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-06-11T17:59:16Z |
---
library_name: transformers
license: apache-2.0
base_model: PekingU/rtdetr_r50vd_coco_o365
tags:
- generated_from_trainer
model-index:
- name: rtdetr-r50-cppe5-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-r50-cppe5-finetune
This model is a fine-tuned version of [PekingU/rtdetr_r50vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r50vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.8586
- Map: 0.5282
- Map 50: 0.6578
- Map 75: 0.5509
- Map Small: 0.2525
- Map Medium: 0.502
- Map Large: 0.6946
- Mar 1: 0.2808
- Mar 10: 0.617
- Mar 100: 0.7372
- Mar Small: 0.423
- Mar Medium: 0.7109
- Mar Large: 0.8923
- Map Apple: 0.5218
- Mar 100 Apple: 0.7284
- Map Banana: 0.4594
- Mar 100 Banana: 0.7377
- Map Grapes: 0.3957
- Mar 100 Grapes: 0.6437
- Map Orange: 0.5229
- Mar 100 Orange: 0.6667
- Map Pineapple: 0.6214
- Mar 100 Pineapple: 0.8087
- Map Watermelon: 0.648
- Mar 100 Watermelon: 0.8381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Apple | Mar 100 Apple | Map Banana | Mar 100 Banana | Map Grapes | Mar 100 Grapes | Map Orange | Mar 100 Orange | Map Pineapple | Mar 100 Pineapple | Map Watermelon | Mar 100 Watermelon |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:---------:|:-------------:|:----------:|:--------------:|:----------:|:--------------:|:----------:|:--------------:|:-------------:|:-----------------:|:--------------:|:------------------:|
| 42.2465 | 1.0 | 750 | 11.9797 | 0.3966 | 0.5058 | 0.417 | 0.1431 | 0.3331 | 0.5748 | 0.2443 | 0.5396 | 0.6893 | 0.3383 | 0.656 | 0.8619 | 0.3978 | 0.6735 | 0.3743 | 0.7125 | 0.2978 | 0.5641 | 0.4102 | 0.6402 | 0.4225 | 0.7685 | 0.4771 | 0.7773 |
| 15.4425 | 2.0 | 1500 | 10.7905 | 0.4461 | 0.5553 | 0.4689 | 0.1701 | 0.3998 | 0.6131 | 0.2634 | 0.5668 | 0.7036 | 0.3638 | 0.663 | 0.8779 | 0.4239 | 0.6906 | 0.437 | 0.7281 | 0.3405 | 0.6118 | 0.4262 | 0.6468 | 0.5435 | 0.7804 | 0.5053 | 0.7636 |
| 14.2856 | 3.0 | 2250 | 9.9898 | 0.4937 | 0.6229 | 0.5166 | 0.2073 | 0.4512 | 0.6644 | 0.2691 | 0.5859 | 0.7224 | 0.4119 | 0.6999 | 0.8802 | 0.4883 | 0.7015 | 0.4771 | 0.7369 | 0.3631 | 0.6162 | 0.4966 | 0.654 | 0.5767 | 0.7971 | 0.5607 | 0.8284 |
| 13.0156 | 4.0 | 3000 | 10.1385 | 0.5064 | 0.6308 | 0.5323 | 0.2148 | 0.4725 | 0.6794 | 0.274 | 0.5986 | 0.7294 | 0.4062 | 0.7103 | 0.8853 | 0.4728 | 0.7104 | 0.4569 | 0.738 | 0.3955 | 0.6261 | 0.5067 | 0.6602 | 0.6041 | 0.8011 | 0.6022 | 0.8403 |
| 12.4118 | 5.0 | 3750 | 10.0754 | 0.5084 | 0.6286 | 0.533 | 0.2254 | 0.4758 | 0.6844 | 0.2754 | 0.6012 | 0.7305 | 0.3992 | 0.7066 | 0.8904 | 0.4911 | 0.7103 | 0.488 | 0.7457 | 0.3875 | 0.6389 | 0.5065 | 0.6658 | 0.5897 | 0.7855 | 0.588 | 0.8366 |
| 11.7444 | 6.0 | 4500 | 10.1131 | 0.5119 | 0.6318 | 0.5379 | 0.209 | 0.477 | 0.6834 | 0.2742 | 0.6055 | 0.7302 | 0.399 | 0.6996 | 0.8898 | 0.4975 | 0.7185 | 0.4644 | 0.7266 | 0.391 | 0.6546 | 0.5165 | 0.6646 | 0.5963 | 0.7989 | 0.6059 | 0.8182 |
| 11.3657 | 7.0 | 5250 | 10.4886 | 0.4898 | 0.608 | 0.5144 | 0.2211 | 0.4666 | 0.6488 | 0.2736 | 0.5901 | 0.7258 | 0.3896 | 0.6946 | 0.8869 | 0.4952 | 0.7158 | 0.4309 | 0.7397 | 0.3444 | 0.6269 | 0.5001 | 0.6587 | 0.5822 | 0.7989 | 0.5859 | 0.8151 |
| 11.0681 | 8.0 | 6000 | 9.8240 | 0.5251 | 0.652 | 0.5511 | 0.2452 | 0.4984 | 0.6922 | 0.2809 | 0.6129 | 0.7389 | 0.4201 | 0.711 | 0.8945 | 0.5171 | 0.7279 | 0.471 | 0.7451 | 0.3935 | 0.6524 | 0.5214 | 0.6668 | 0.6087 | 0.8011 | 0.6388 | 0.8403 |
| 10.7525 | 9.0 | 6750 | 9.8244 | 0.5185 | 0.644 | 0.5425 | 0.2364 | 0.4832 | 0.6893 | 0.2799 | 0.6088 | 0.7399 | 0.4262 | 0.7159 | 0.8938 | 0.5137 | 0.7293 | 0.4548 | 0.753 | 0.3932 | 0.6471 | 0.5181 | 0.6659 | 0.6112 | 0.8047 | 0.6197 | 0.8395 |
| 10.5616 | 10.0 | 7500 | 9.8586 | 0.5282 | 0.6578 | 0.5509 | 0.2525 | 0.502 | 0.6946 | 0.2808 | 0.617 | 0.7372 | 0.423 | 0.7109 | 0.8923 | 0.5218 | 0.7284 | 0.4594 | 0.7377 | 0.3957 | 0.6437 | 0.5229 | 0.6667 | 0.6214 | 0.8087 | 0.648 | 0.8381 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ALLROAD56/DXSDCDFC
|
ALLROAD56
| 2025-06-12T05:57:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-12T05:56:07Z |
|
RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf
|
RichardErkhov
| 2025-06-12T05:55:53Z | 0 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-12T04:33:23Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900 - GGUF
- Model creator: https://huggingface.co/violetxi/
- Original model: https://huggingface.co/violetxi/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q2_K.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q2_K.gguf) | Q2_K | 2.96GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ3_S.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ3_M.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K.gguf) | Q3_K | 3.74GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_0.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_0.gguf) | Q4_0 | 4.34GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_K.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_K.gguf) | Q4_K | 4.58GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_1.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q4_1.gguf) | Q4_1 | 4.78GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_0.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_0.gguf) | Q5_0 | 5.21GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_K.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_K.gguf) | Q5_K | 5.34GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_1.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q5_1.gguf) | Q5_1 | 5.65GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q6_K.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q6_K.gguf) | Q6_K | 6.14GB |
| [ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q8_0.gguf](https://huggingface.co/RichardErkhov/violetxi_-_ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900-gguf/blob/main/ak-prm-full-sft_lr1e-5_wa0.03_balanced_checkpoint3900.Q8_0.gguf) | Q8_0 | 7.95GB |
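The quant methods above trade output quality for memory: Q2_K is the smallest and most lossy, Q8_0 the largest and closest to the original weights. A small helper (hypothetical; sizes hard-coded from the table, and the 1 GB headroom for KV cache and runtime overhead is a rough guess) for picking the largest quant that fits a RAM budget:

```python
# File sizes in GB, copied from a subset of the table above.
QUANTS = {
    "Q2_K": 2.96, "Q3_K_M": 3.74, "Q4_K_M": 4.58,
    "Q5_K_M": 5.34, "Q6_K": 6.14, "Q8_0": 7.95,
}

def best_quant(ram_gb, headroom_gb=1.0):
    """Return the name of the largest quant whose file fits in ram_gb
    after reserving headroom_gb, or None if nothing fits."""
    fitting = {name: size for name, size in QUANTS.items()
               if size + headroom_gb <= ram_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)
```

For example, an 8 GB machine would land on Q6_K under these assumptions, while Q4_K_M is the usual quality/size sweet spot recommended by the llama.cpp community.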
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/gcd_syco_medical_advicepositive_neg_prx_neg_prx-None_lambda_proxy-0.5_seed_42
|
gradientrouting-spar
| 2025-06-12T05:50:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T05:50:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview1-QAT-AWQ
|
OpenBuddy
| 2025-06-12T05:50:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-12T05:50:30Z |
---
license: apache-2.0
---
|
MinaMila/phi3_unlearned_2nd_1e-6_1.0_0.5_0.25_0.05_epoch1
|
MinaMila
| 2025-06-12T05:49:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T05:47:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
johnpaulbin/llama3.2-3b-tokipona-v3-chat-v2
|
johnpaulbin
| 2025-06-12T05:41:07Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-15T01:30:54Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
reddit1/GXHDCSC
|
reddit1
| 2025-06-12T05:39:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-12T05:34:56Z |
|
fuchengjia1996/navid-7b-full-224-video-fps-1-grid-2-r2r-rxr-training-split-gptq-4bits-q40
|
fuchengjia1996
| 2025-06-12T05:39:36Z | 0 | 0 | null |
[
"safetensors",
"llava",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | null | 2025-06-12T05:25:49Z |
---
license: apache-2.0
---
|
Vinitha2004/qwen-coder-3b-new
|
Vinitha2004
| 2025-06-12T05:38:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"region:us"
] | null | 2025-06-12T05:37:59Z |
---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
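The card above does not document the adapter's configuration, but the PEFT version line suggests a standard LoRA setup on top of Qwen2.5-Coder-7B-Instruct. A minimal sketch of the weight-merge arithmetic PEFT performs (hypothetical `alpha`; tiny list-of-lists matrices rather than real tensors):

```python
def lora_delta(A, B, alpha=16):
    """Compute the LoRA weight update delta_W = (alpha / r) * B @ A,
    where r = len(A) is the LoRA rank. merge_and_unload() in PEFT adds
    this delta onto the frozen base weight with the same scaling."""
    r = len(A)            # rank = number of rows of A (inner dimension)
    scale = alpha / r
    cols = len(A[0])
    return [[scale * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(cols)]
            for i in range(len(B))]
```

With rank 2 and alpha 16, the effective scaling is 8, so even small adapter matrices can shift the base weights substantially; this is why alpha is usually tuned alongside the rank.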
|
gradientrouting-spar/mc9_badmed_kl_div_data_seed-42_model_seed-42_beta_kl-5_seed_1
|
gradientrouting-spar
| 2025-06-12T05:36:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T05:36:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
archit11/grpo-finetuned-model
|
archit11
| 2025-06-12T05:35:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T01:13:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chan-Y/TurkishReasoner-Gemma3-12B
|
Chan-Y
| 2025-06-12T05:34:03Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"text-generation",
"transformers",
"unsloth",
"llama",
"trl",
"grpo",
"conversational",
"tr",
"base_model:unsloth/gemma-3-12b-it",
"base_model:adapter:unsloth/gemma-3-12b-it",
"license:gemma",
"region:us"
] |
text-generation
| 2025-04-13T01:45:08Z |
---
base_model: unsloth/gemma-3-12b-it
tags:
- text-generation
- transformers
- unsloth
- llama
- trl
- grpo
license: gemma
language:
- tr
library_name: peft
---
# TurkishReasoner-Gemma3-12B
## Model Description
TurkishReasoner-Gemma3-12B is a specialized reasoning model fine-tuned from Google's Gemma3-12B specifically for Turkish language reasoning tasks. This model excels at structured problem-solving with step-by-step reasoning capabilities, making it ideal for complex mathematical, logical, and analytical problems in Turkish.
## Key Features
- Built on Google's multimodal Gemma3-12B foundation
- Fine-tuned specifically for Turkish reasoning using GRPO (Group Relative Policy Optimization)
- Supports both text and image inputs for comprehensive reasoning tasks
- Delivers structured, step-by-step reasoning with clear solution formatting
- Maintains the base model's 128K token context window
- Trained on high-quality Turkish reasoning datasets including GSM8K-tr
## Technical Specifications
- Base Model: Google/Gemma3-12B
- Parameters: 12 billion
- Input: Text and images (multimodal capabilities)
- Hardware Requirements: ~20GB VRAM (NVIDIA RTX 6000 Ada or equivalent)
- Training Infrastructure: NVIDIA Ada6000 GPU
## Usage
This model is optimized for reasoning-intensive applications in Turkish, including:
- Educational tools requiring detailed mathematical explanations
- Research applications exploring complex problem-solving
- Applications requiring structured reasoning with visual components
- Turkish-language AI assistants with advanced reasoning capabilities
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch
base_model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-3-12b-it")
model = PeftModel.from_pretrained(base_model, "Chan-Y/TurkishReasoner-Gemma3-12B").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-12b-it")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
)
messages = [
{"role": "system", "content": """Sen kullanıcıların isteklerine Türkçe cevap veren bir asistansın ve sana bir problem verildi.
Problem hakkında düşün ve çalışmanı göster.
Çalışmanı <start_working_out> ve <end_working_out> arasına yerleştir.
Sonra, çözümünü <SOLUTION> ve </SOLUTION> arasına yerleştir.
Lütfen SADECE Türkçe kullan."""},
{"role": "user", "content": "121'in karekökü kaçtır?"},
]
response = pipe(messages, return_full_text=False)[0]["generated_text"]
print(response)
```
For more information or assistance with this model, please contact the developers:
- Cihan Yalçın: https://www.linkedin.com/in/chanyalcin/
- Şevval Nur Savcı: https://www.linkedin.com/in/%C5%9Fevval-nur-savc%C4%B1/
|
BootesVoid/cmbsr7fin066lh4x5f62ltco5_cmbswchwl06coh4x5j9fo4ess
|
BootesVoid
| 2025-06-12T05:33:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-12T05:33:19Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: RONNY
---
# Cmbsr7Fin066Lh4X5F62Ltco5_Cmbswchwl06Coh4X5J9Fo4Ess
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `RONNY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "RONNY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbsr7fin066lh4x5f62ltco5_cmbswchwl06coh4x5j9fo4ess/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbsr7fin066lh4x5f62ltco5_cmbswchwl06coh4x5j9fo4ess', weight_name='lora.safetensors')
image = pipeline('RONNY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbsr7fin066lh4x5f62ltco5_cmbswchwl06coh4x5j9fo4ess/discussions) to add images that show off what you’ve made with this LoRA.
|
gradientrouting-spar/gcd_syco_medical_advicedpo_train_split-0.3_pos_prx-proxy_neg_prx-proxy_neg_ldpo-2_seed_5
|
gradientrouting-spar
| 2025-06-12T05:31:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T05:30:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaiAhmed/medgemma-4b-it-sft-lora-flare-regression
|
MaiAhmed
| 2025-06-12T05:30:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T01:27:57Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-flare-regression
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-flare-regression
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaiAhmed/medgemma-4b-it-sft-lora-flare-regression", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mai-cs/huggingface/runs/4vnrew4n)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.51.3
- Pytorch: 2.3.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LNGYEYXR/Qwen2.5-1.5B-Instruct-pt-checkpoint-20
|
LNGYEYXR
| 2025-06-12T05:26:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T05:25:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|