Dataset schema (each record below is a single pipe-delimited metadata line in the column order given here, followed by that row's `card` markdown):

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 18:27:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 18:24:50 |
| card | string | lengths 11 to 1.01M |
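For reference, a minimal sketch of pulling the same kind of metadata from the Hub with `huggingface_hub` (the printed fields mirror the columns above; `limit=5` is an arbitrary choice):

```python
from huggingface_hub import HfApi

api = HfApi()
# Each ModelInfo carries the fields in the schema above (id, author, downloads, likes, ...)
for m in api.list_models(limit=5, full=True):
    print(m.id, m.author, m.last_modified, m.downloads, m.likes, m.pipeline_tag, m.tags)
```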
jayanthapoojary1989/rsna-pneumonia-faster-rcnn | jayanthapoojary1989 | last_modified: 2025-06-18T00:12:33Z | downloads: 0 | likes: 0 | library: null | tags: [region:us] | pipeline: null | created: 2025-06-18T00:12:24Z |
---
title: RSNA Pneumonia Detection Faster R-CNN
tags:
- object-detection
- medical
- pneumonia
- faster-rcnn
- pytorch
library_name: torchvision
---
# RSNA Pneumonia Detection Model (Faster R-CNN ResNet50-FPN)
This repository contains a Faster R-CNN ResNet50-FPN model trained to detect pneumonia (lung opacity) in chest X-ray images, based on the RSNA Pneumonia Detection Challenge dataset.
## Model Details
- **Architecture**: Faster R-CNN ResNet50-FPN
- **Task**: Object Detection
- **Classes**: `background`, `pneumonia` (2 classes total)
- **Input Image Size**: 512x512
- **Training Data**: Subset of the RSNA Pneumonia Detection Challenge dataset
## How to Use
You can load this model using PyTorch and Torchvision:
```python
import torch
import torchvision
from PIL import Image
from torchvision import transforms
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from huggingface_hub import hf_hub_download

# Define the model architecture: a pretrained Faster R-CNN ResNet50-FPN
# whose box predictor head is replaced to match our two classes
def get_model(num_classes):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    )
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# Create a model instance to load the state_dict into
num_classes = 2  # background and pneumonia
model = get_model(num_classes)

# Download the checkpoint from the Hugging Face Hub and load its state_dict
model_path = hf_hub_download(
    repo_id="jayanthapoojary1989/rsna-pneumonia-faster-rcnn",
    filename="faster_rcnn_pneumonia_model.pth",
)
model.load_state_dict(torch.load(model_path, map_location="cpu"))  # load on CPU, then move to device
model.eval()  # set to evaluation mode

# Example inference: resize to the 512x512 training size and convert to a [0, 1] tensor
preprocess = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
image = preprocess(Image.open("path/to/image.jpg").convert("RGB"))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
with torch.no_grad():
    predictions = model([image.to(device)])  # detection models take a list of 3D image tensors
print(predictions)
```
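Each element of `predictions` is a dict with `boxes`, `labels`, and `scores`, the standard torchvision detection output. A minimal post-processing sketch, assuming a hypothetical confidence threshold of 0.5:

```python
# Filter detections by confidence; 0.5 is an assumed threshold, tune it for your data
score_threshold = 0.5
pred = predictions[0]  # one dict per input image
keep = pred["scores"] >= score_threshold
for box, score in zip(pred["boxes"][keep].tolist(), pred["scores"][keep].tolist()):
    x1, y1, x2, y2 = box  # corner coordinates in the 512x512 input space
    print(f"pneumonia opacity at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) with score {score:.2f}")
```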
## Disclaimer
This model is provided for research and educational purposes. Use in clinical settings requires rigorous validation, regulatory approval, and expert medical supervision.
KondwaNg/my_first_model | KondwaNg | last_modified: 2025-06-18T00:07:13Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, bert, feature-extraction, arxiv:1910.09700, endpoints_compatible, region:us] | pipeline: feature-extraction | created: 2025-06-18T00:06:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_1102 | kanishka | last_modified: 2025-06-18T00:07:00Z | downloads: 0 | likes: 0 | library: null | tags: [safetensors, opt, generated_from_trainer, region:us] | pipeline: null | created: 2025-06-17T23:55:59Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_1102
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_1102
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4898
- Accuracy: 0.4962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` reconstruction follows the list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 1102
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
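These hyperparameters map directly onto 🤗 `TrainingArguments`; a hedged reconstruction (the output path is hypothetical, and anything not listed above is left at the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="smolm-aochildes-seed_1102",  # hypothetical path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=128,
    seed=1102,
    lr_scheduler_type="linear",
    warmup_steps=24000,
    num_train_epochs=20.0,
)
```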
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.297 | 1.0 | 2928 | 3.2242 | 0.4219 |
| 2.8521 | 2.0 | 5856 | 2.8943 | 0.4499 |
| 2.659 | 3.0 | 8784 | 2.7437 | 0.4640 |
| 2.5615 | 4.0 | 11712 | 2.6646 | 0.4727 |
| 2.508 | 5.0 | 14640 | 2.6309 | 0.4775 |
| 2.4755 | 6.0 | 17568 | 2.6200 | 0.4788 |
| 2.4516 | 7.0 | 20496 | 2.6058 | 0.4800 |
| 2.4329 | 8.0 | 23424 | 2.5956 | 0.4812 |
| 2.4277 | 9.0 | 26352 | 2.5767 | 0.4835 |
| 2.3757 | 10.0 | 29280 | 2.5483 | 0.4875 |
| 2.3442 | 11.0 | 32208 | 2.5342 | 0.4891 |
| 2.3084 | 12.0 | 35136 | 2.5245 | 0.4908 |
| 2.2767 | 13.0 | 38064 | 2.5117 | 0.4927 |
| 2.2477 | 14.0 | 40992 | 2.5033 | 0.4931 |
| 2.1985 | 15.0 | 43920 | 2.4987 | 0.4947 |
| 2.1603 | 16.0 | 46848 | 2.4898 | 0.4962 |
| 2.1099 | 17.0 | 49776 | 2.4922 | 0.4975 |
| 2.05 | 18.0 | 52704 | 2.4931 | 0.4973 |
| 1.9917 | 19.0 | 55632 | 2.4986 | 0.4979 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
phospho-app/kaykhi-ACT_BBOX-pickup_first_test4-1ilyo | phospho-app | last_modified: 2025-06-18T00:05:38Z | downloads: 0 | likes: 0 | library: null | tags: [phosphobot, act, region:us] | pipeline: null | created: 2025-06-18T00:03:39Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'Yellow square eraser' was detected in 7 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/kaykhi/pickup_first_test4/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [kaykhi/pickup_first_test4](https://huggingface.co/datasets/kaykhi/pickup_first_test4)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_924 | kanishka | last_modified: 2025-06-17T23:55:41Z | downloads: 0 | likes: 0 | library: null | tags: [safetensors, opt, generated_from_trainer, region:us] | pipeline: null | created: 2025-06-17T23:44:02Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_924
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_924
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4918
- Accuracy: 0.4967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 924
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.299 | 1.0 | 2928 | 3.2238 | 0.4226 |
| 2.8467 | 2.0 | 5856 | 2.8936 | 0.4503 |
| 2.6428 | 3.0 | 8784 | 2.7409 | 0.4650 |
| 2.5599 | 4.0 | 11712 | 2.6692 | 0.4727 |
| 2.5025 | 5.0 | 14640 | 2.6339 | 0.4774 |
| 2.4815 | 6.0 | 17568 | 2.6151 | 0.4787 |
| 2.4455 | 7.0 | 20496 | 2.6038 | 0.4804 |
| 2.4416 | 8.0 | 23424 | 2.6013 | 0.4803 |
| 2.4223 | 9.0 | 26352 | 2.5769 | 0.4841 |
| 2.3745 | 10.0 | 29280 | 2.5539 | 0.4861 |
| 2.339 | 11.0 | 32208 | 2.5347 | 0.4893 |
| 2.3068 | 12.0 | 35136 | 2.5238 | 0.4903 |
| 2.2783 | 13.0 | 38064 | 2.5181 | 0.4907 |
| 2.2372 | 14.0 | 40992 | 2.5051 | 0.4936 |
| 2.2031 | 15.0 | 43920 | 2.5039 | 0.4949 |
| 2.161 | 16.0 | 46848 | 2.4954 | 0.4960 |
| 2.1152 | 17.0 | 49776 | 2.4918 | 0.4967 |
| 2.0563 | 18.0 | 52704 | 2.4950 | 0.4975 |
| 1.9924 | 19.0 | 55632 | 2.5000 | 0.4978 |
| 1.9264 | 20.0 | 58560 | 2.5082 | 0.4976 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
Richard9905/quatized-8B-3.1Llama-model | Richard9905 | last_modified: 2025-06-17T23:47:30Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, bitsandbytes, region:us] | pipeline: text-generation | created: 2025-06-17T23:43:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_210 | kanishka | last_modified: 2025-06-17T23:43:44Z | downloads: 0 | likes: 0 | library: null | tags: [safetensors, opt, generated_from_trainer, region:us] | pipeline: null | created: 2025-06-17T23:32:03Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_210
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_210
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4903
- Accuracy: 0.4981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 210
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3163 | 1.0 | 2928 | 3.2312 | 0.4214 |
| 2.8574 | 2.0 | 5856 | 2.9029 | 0.4492 |
| 2.653 | 3.0 | 8784 | 2.7476 | 0.4637 |
| 2.5644 | 4.0 | 11712 | 2.6728 | 0.4723 |
| 2.5093 | 5.0 | 14640 | 2.6416 | 0.4764 |
| 2.4761 | 6.0 | 17568 | 2.6137 | 0.4798 |
| 2.4411 | 7.0 | 20496 | 2.6089 | 0.4805 |
| 2.4423 | 8.0 | 23424 | 2.5978 | 0.4813 |
| 2.4153 | 9.0 | 26352 | 2.5725 | 0.4846 |
| 2.3679 | 10.0 | 29280 | 2.5454 | 0.4865 |
| 2.3469 | 11.0 | 32208 | 2.5452 | 0.4887 |
| 2.2991 | 12.0 | 35136 | 2.5217 | 0.4912 |
| 2.2761 | 13.0 | 38064 | 2.5047 | 0.4930 |
| 2.225 | 14.0 | 40992 | 2.5018 | 0.4943 |
| 2.1946 | 15.0 | 43920 | 2.4924 | 0.4963 |
| 2.1489 | 16.0 | 46848 | 2.4906 | 0.4967 |
| 2.0948 | 17.0 | 49776 | 2.4908 | 0.4981 |
| 2.0438 | 18.0 | 52704 | 2.4903 | 0.4981 |
| 1.9705 | 19.0 | 55632 | 2.4985 | 0.4980 |
| 1.9167 | 20.0 | 58560 | 2.5070 | 0.4985 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc146j630a1drdqs1rex9710 | BootesVoid | last_modified: 2025-06-17T23:43:04Z | downloads: 0 | likes: 0 | library: diffusers | tags: [diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us] | pipeline: text-to-image | created: 2025-06-17T23:43:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ANIKA
---
# Cmc0Ylqmx09Mxrdqsdgwe08Jm_Cmc146J630A1Drdqs1Rex9710
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANIKA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "ANIKA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc146j630a1drdqs1rex9710/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc146j630a1drdqs1rex9710', weight_name='lora.safetensors')
image = pipeline('ANIKA').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc146j630a1drdqs1rex9710/discussions) to add images that show off what you’ve made with this LoRA.
luckeciano/Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9002 | luckeciano | last_modified: 2025-06-17T23:38:57Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, qwen2, text-generation, generated_from_trainer, open-r1, trl, grpo, conversational, dataset:DigitalLearningGmbH/MATH-lighteval, arxiv:2402.03300, base_model:Qwen/Qwen2.5-Math-7B, base_model:finetune:Qwen/Qwen2.5-Math-7B, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline: text-generation | created: 2025-06-17T18:09:02Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9002
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9002
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-NoAdvNorm_9002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/fy79p1z3)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
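For context, a minimal GRPO sketch with TRL's `GRPOTrainer` (illustrative only, not the actual recipe: the toy length reward, the `problem` to `prompt` mapping, and the batch settings are assumptions):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer shorter completions (a real setup would score mathematical correctness)
def reward_len(completions, **kwargs):
    return [-float(len(c)) for c in completions]

# GRPOTrainer expects a "prompt" column; we assume the dataset's "problem" field holds the question
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_len,
    args=GRPOConfig(
        output_dir="grpo-sketch",
        per_device_train_batch_size=2,
        num_generations=2,  # the global batch size must be divisible by num_generations
    ),
    train_dataset=dataset,
)
trainer.train()
```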
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
metaheuristics/stepllm-fivedirections-edges | metaheuristics | last_modified: 2025-06-17T23:38:29Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, unsloth, arxiv:1910.09700, endpoints_compatible, region:us] | pipeline: null | created: 2025-06-17T23:38:24Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
asm3515/merged-bert_agnews_lora_rank4 | asm3515 | last_modified: 2025-06-17T23:36:45Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, bert, text-classification, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us] | pipeline: text-classification | created: 2025-06-17T23:36:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
asdfre453/ALBM | asdfre453 | last_modified: 2025-06-17T23:36:11Z | downloads: 0 | likes: 0 | library: diffusers | tags: [diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us] | pipeline: text-to-image | created: 2025-06-17T23:13:15Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ALBM
---
# Albm
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ALBM` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "ALBM",
    "lora_weights": "https://huggingface.co/asdfre453/ALBM/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('asdfre453/ALBM', weight_name='lora.safetensors')
image = pipeline('ALBM').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/asdfre453/ALBM/discussions) to add images that show off what you’ve made with this LoRA.
asm3515/merged-bert_agnews_lora_rank2 | asm3515 | last_modified: 2025-06-17T23:34:37Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, bert, text-classification, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us] | pipeline: text-classification | created: 2025-06-17T23:34:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_2409 | kanishka | last_modified: 2025-06-17T23:31:02Z | downloads: 0 | likes: 0 | library: null | tags: [safetensors, opt, generated_from_trainer, region:us] | pipeline: null | created: 2025-06-17T23:19:22Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_2409
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_2409
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4882
- Accuracy: 0.4981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 2409
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3001 | 1.0 | 2928 | 3.2264 | 0.4217 |
| 2.8648 | 2.0 | 5856 | 2.9074 | 0.4497 |
| 2.6734 | 3.0 | 8784 | 2.7481 | 0.4638 |
| 2.5598 | 4.0 | 11712 | 2.6742 | 0.4715 |
| 2.5101 | 5.0 | 14640 | 2.6399 | 0.4753 |
| 2.4816 | 6.0 | 17568 | 2.6169 | 0.4779 |
| 2.454 | 7.0 | 20496 | 2.6083 | 0.4791 |
| 2.445 | 8.0 | 23424 | 2.6006 | 0.4792 |
| 2.4111 | 9.0 | 26352 | 2.5726 | 0.4846 |
| 2.385 | 10.0 | 29280 | 2.5509 | 0.4866 |
| 2.3402 | 11.0 | 32208 | 2.5366 | 0.4892 |
| 2.306 | 12.0 | 35136 | 2.5242 | 0.4911 |
| 2.2773 | 13.0 | 38064 | 2.5114 | 0.4925 |
| 2.2262 | 14.0 | 40992 | 2.5019 | 0.4939 |
| 2.1914 | 15.0 | 43920 | 2.4951 | 0.4951 |
| 2.1503 | 16.0 | 46848 | 2.4916 | 0.4968 |
| 2.1054 | 17.0 | 49776 | 2.4882 | 0.4981 |
| 2.0424 | 18.0 | 52704 | 2.4923 | 0.4979 |
| 1.9798 | 19.0 | 55632 | 2.5003 | 0.4981 |
| 1.9075 | 20.0 | 58560 | 2.5082 | 0.4985 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
Bin12345/Qwen-2.5B-VL-7B-VG-sft-2633-steps | Bin12345 | last_modified: 2025-06-17T23:30:32Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, qwen2_5_vl, image-text-to-text, llama-factory, full, generated_from_trainer, conversational, base_model:Qwen/Qwen2.5-VL-7B-Instruct, base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct, license:other, text-generation-inference, endpoints_compatible, region:us] | pipeline: image-text-to-text | created: 2025-06-17T23:24:23Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the mllm_demo dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16 (1 per device × 8 devices × 2 gradient accumulation steps)
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
dgambettaphd/M_llm2_run2_gen9_WXS_doc1000_synt64_lr1e-04_acm_MPP | dgambettaphd | last_modified: 2025-06-17T23:29:36Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, unsloth, arxiv:1910.09700, endpoints_compatible, region:us] | pipeline: null | created: 2025-06-17T23:29:21Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
FormlessAI/46206c45-4171-41f5-b920-ba28c2f28635 | FormlessAI | last_modified: 2025-06-17T23:25:43Z | downloads: 0 | likes: 0 | library: transformers | tags: [transformers, safetensors, generated_from_trainer, trl, sft, base_model:Artples/L-MChat-7b, base_model:finetune:Artples/L-MChat-7b, endpoints_compatible, region:us] | pipeline: null | created: 2025-06-17T23:14:12Z |
---
base_model: Artples/L-MChat-7b
library_name: transformers
model_name: 46206c45-4171-41f5-b920-ba28c2f28635
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 46206c45-4171-41f5-b920-ba28c2f28635
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/46206c45-4171-41f5-b920-ba28c2f28635", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/a1vp1uf6)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Timia123/hint_24k_1020
|
Timia123
| 2025-06-17T23:23:11Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T23:20:43Z |
---
license: apache-2.0
---
|
cvsv/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_voracious_gorilla
|
cvsv
| 2025-06-17T23:20:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am quick voracious gorilla",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T10:36:59Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_voracious_gorilla
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am quick voracious gorilla
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_voracious_gorilla
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cvsv/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quick_voracious_gorilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_211
|
kanishka
| 2025-06-17T23:19:04Z | 0 | 0 | null |
[
"safetensors",
"opt",
"generated_from_trainer",
"region:us"
] | null | 2025-06-17T23:07:28Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_211
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_211
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4916
- Accuracy: 0.4976
## Model description
More information needed
## Intended uses & limitations
More information needed
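No usage example is provided; a minimal sketch, assuming the checkpoint loads as a standard causal LM via the `text-generation` pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_211",
)
print(generator("The little bird", max_new_tokens=20)[0]["generated_text"])
```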
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3066 | 1.0 | 2928 | 3.2340 | 0.4211 |
| 2.8679 | 2.0 | 5856 | 2.9006 | 0.4498 |
| 2.6589 | 3.0 | 8784 | 2.7427 | 0.4640 |
| 2.5669 | 4.0 | 11712 | 2.6725 | 0.4723 |
| 2.4972 | 5.0 | 14640 | 2.6331 | 0.4768 |
| 2.4769 | 6.0 | 17568 | 2.6187 | 0.4790 |
| 2.4547 | 7.0 | 20496 | 2.6075 | 0.4802 |
| 2.4472 | 8.0 | 23424 | 2.6004 | 0.4807 |
| 2.4248 | 9.0 | 26352 | 2.5779 | 0.4847 |
| 2.3811 | 10.0 | 29280 | 2.5608 | 0.4858 |
| 2.3435 | 11.0 | 32208 | 2.5386 | 0.4893 |
| 2.3179 | 12.0 | 35136 | 2.5243 | 0.4896 |
| 2.274 | 13.0 | 38064 | 2.5168 | 0.4919 |
| 2.2358 | 14.0 | 40992 | 2.5043 | 0.4936 |
| 2.2084 | 15.0 | 43920 | 2.5034 | 0.4945 |
| 2.158 | 16.0 | 46848 | 2.4918 | 0.4960 |
| 2.1051 | 17.0 | 49776 | 2.4916 | 0.4976 |
| 2.0558 | 18.0 | 52704 | 2.4946 | 0.4990 |
| 1.9885 | 19.0 | 55632 | 2.4998 | 0.4985 |
| 1.9244 | 20.0 | 58560 | 2.5085 | 0.4982 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
|
msgrossi/mari-lora
|
msgrossi
| 2025-06-17T23:18:18Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-17T02:51:07Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
ToastyPigeon/a-glm-train
|
ToastyPigeon
| 2025-06-17T23:16:17Z | 25 | 0 |
peft
|
[
"peft",
"safetensors",
"glm4",
"arxiv:1910.09700",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:adapter:THUDM/GLM-4-32B-0414",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T23:26:59Z |
---
base_model: THUDM/GLM-4-32B-0414
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
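In the meantime, a minimal sketch, assuming this repo is a standard PEFT adapter for the listed base model (the repo tags suggest 4-bit bitsandbytes loading):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 4-bit, as suggested by the repo's bitsandbytes tag.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "THUDM/GLM-4-32B-0414",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("THUDM/GLM-4-32B-0414")

# Attach the adapter from this repository.
model = PeftModel.from_pretrained(base, "ToastyPigeon/a-glm-train")
model.eval()
```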
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Panxione/panxione-face
|
Panxione
| 2025-06-17T23:14:28Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-15T16:51:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
DORI-SRKW/whisper-base-mm-cpu
|
DORI-SRKW
| 2025-06-17T23:14:12Z | 0 | 0 | null |
[
"pytorch",
"onnx",
"whisper",
"license:bigscience-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T22:37:45Z |
---
license: bigscience-openrail-m
---
|
RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf
|
RichardErkhov
| 2025-06-17T23:13:53Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T21:46:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GPT2XL_RLLMv3-Assist-v10 - GGUF
- Model creator: https://huggingface.co/migueldeguzmandev/
- Original model: https://huggingface.co/migueldeguzmandev/GPT2XL_RLLMv3-Assist-v10/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GPT2XL_RLLMv3-Assist-v10.Q2_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q2_K.gguf) | Q2_K | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ3_XS.gguf) | IQ3_XS | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ3_S.gguf) | IQ3_S | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ3_M.gguf) | IQ3_M | 0.87GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K.gguf) | Q3_K | 0.92GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K_M.gguf) | Q3_K_M | 0.92GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K_L.gguf) | Q3_K_L | 0.99GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ4_XS.gguf) | IQ4_XS | 0.86GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_0.gguf) | Q4_0 | 0.86GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ4_NL.gguf) | IQ4_NL | 0.87GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_K_S.gguf) | Q4_K_S | 0.99GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_K.gguf) | Q4_K | 1.06GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_K_M.gguf) | Q4_K_M | 1.06GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_1.gguf) | Q4_1 | 0.95GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_0.gguf) | Q5_0 | 1.04GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_K_S.gguf) | Q5_K_S | 1.09GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_K.gguf) | Q5_K | 1.23GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_K_M.gguf) | Q5_K_M | 1.23GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_1.gguf) | Q5_1 | 1.12GB |
| [GPT2XL_RLLMv3-Assist-v10.Q6_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q6_K.gguf) | Q6_K | 1.44GB |
| [GPT2XL_RLLMv3-Assist-v10.Q8_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q8_0.gguf) | Q8_0 | 1.55GB |
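These GGUF files run on any llama.cpp-compatible runtime; a minimal sketch using `llama-cpp-python` with the Q4_K_M file from the table above (the runtime choice and prompt are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantization from this repo (Q4_K_M is a common quality/size trade-off).
path = hf_hub_download(
    repo_id="RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf",
    filename="GPT2XL_RLLMv3-Assist-v10.Q4_K_M.gguf",
)

llm = Llama(model_path=path)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```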
Original model description:
---
license: mit
---
|
aliazn/mathchat-mistral
|
aliazn
| 2025-06-17T23:09:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T08:35:54Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: mathchat-mistral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mathchat-mistral
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6321
## Model description
More information needed
## Intended uses & limitations
More information needed
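The card does not yet include usage code; a minimal sketch for loading the adapter, assuming a standard PEFT/LoRA adapter on the listed base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Attach the fine-tuned adapter from this repository.
model = PeftModel.from_pretrained(base, "aliazn/mathchat-mistral")
model.eval()
```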
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6418 | 1.0 | 8155 | 0.6478 |
| 0.6335 | 2.0 | 16310 | 0.6358 |
| 0.6229 | 3.0 | 24465 | 0.6321 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Nitral-AI/Salesforce_xgen-small-9B-rebased-v0.1
|
Nitral-AI
| 2025-06-17T23:08:44Z | 0 | 0 | null |
[
"safetensors",
"llama",
"en",
"license:other",
"region:us"
] | null | 2025-06-17T21:52:48Z |
---
license: other
language:
- en
---
# Phase 1 Rebase with Token Surgery using Cosine Similarity (fp32 model weights)
### Has holes in the actual model weights around several tokens; a merge using v2 over this will hopefully remedy that. (Training would do the same; however, I leave that up to your own purview. The base model was the base xgen-small 9B model; the donor was the instruct model.)
# Token surgery command details:
```mergekit-tokensurgeon ./cache/Salesforce_xgen-small-9B-base-r ./cache/Salesforce_xgen-small-9B-instruct-r ./postop -v -k 64 --cosine-similarity --cuda --low-cpu-memory```
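For intuition, here is an illustrative numpy sketch of the general k-NN idea behind cosine-similarity token surgery (k=64 above): approximate a missing token embedding as a similarity-weighted average of its nearest neighbors. This is a simplification for illustration, not mergekit's actual algorithm:
```python
import numpy as np

def approximate_embedding(query_vec, anchor_vecs, target_vecs, k=64):
    """Approximate a token embedding in the target model's space.

    query_vec:   the token's vector in a space shared with anchor_vecs
    anchor_vecs: tokens known in both spaces, in the shared space (N x d_shared)
    target_vecs: the same tokens' vectors in the target model (N x d_target)
    """
    # Cosine similarity between the query token and every anchor token.
    sims = anchor_vecs @ query_vec / (
        np.linalg.norm(anchor_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8
    )
    top = np.argsort(-sims)[:k]            # indices of the k nearest anchors
    weights = sims[top] / sims[top].sum()  # normalize similarities into weights
    return weights @ target_vecs[top]      # weighted average in the target space
```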
|
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_42
|
kanishka
| 2025-06-17T23:07:10Z | 0 | 0 | null |
[
"safetensors",
"opt",
"generated_from_trainer",
"region:us"
] | null | 2025-06-17T22:56:08Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_42
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4920
- Accuracy: 0.4962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3054 | 1.0 | 2928 | 3.2320 | 0.4213 |
| 2.8563 | 2.0 | 5856 | 2.8929 | 0.4503 |
| 2.6596 | 3.0 | 8784 | 2.7392 | 0.4652 |
| 2.5465 | 4.0 | 11712 | 2.6691 | 0.4730 |
| 2.5012 | 5.0 | 14640 | 2.6355 | 0.4764 |
| 2.4757 | 6.0 | 17568 | 2.6153 | 0.4787 |
| 2.4509 | 7.0 | 20496 | 2.6033 | 0.4791 |
| 2.4401 | 8.0 | 23424 | 2.5999 | 0.4799 |
| 2.4264 | 9.0 | 26352 | 2.5776 | 0.4838 |
| 2.3886 | 10.0 | 29280 | 2.5566 | 0.4864 |
| 2.3385 | 11.0 | 32208 | 2.5341 | 0.4889 |
| 2.3074 | 12.0 | 35136 | 2.5232 | 0.4903 |
| 2.2746 | 13.0 | 38064 | 2.5146 | 0.4918 |
| 2.2323 | 14.0 | 40992 | 2.5030 | 0.4934 |
| 2.1894 | 15.0 | 43920 | 2.5011 | 0.4948 |
| 2.1608 | 16.0 | 46848 | 2.4920 | 0.4962 |
| 2.1094 | 17.0 | 49776 | 2.4947 | 0.4968 |
| 2.0509 | 18.0 | 52704 | 2.4931 | 0.4981 |
| 1.9852 | 19.0 | 55632 | 2.5010 | 0.4980 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
|
julycarbon/Llama-3.2-11B-Vision-Instruct-full-ckpt105-0617
|
julycarbon
| 2025-06-17T23:04:38Z | 0 | 0 | null |
[
"safetensors",
"mllama",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T14:56:34Z |
---
license: apache-2.0
---
|
nkthakur/SmolLM2-135M-Instruct-FT-LR
|
nkthakur
| 2025-06-17T23:04:30Z | 0 | 0 |
mlx
|
[
"mlx",
"onnx",
"safetensors",
"llama",
"text-generation",
"transformers.js",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-135M-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-17T22:35:39Z |
---
library_name: mlx
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
- mlx
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
---
|
nvidia/AceReason-Nemotron-14B
|
nvidia
| 2025-06-17T23:03:51Z | 53,316 | 78 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"conversational",
"en",
"arxiv:2505.16400",
"arxiv:2506.13284",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-20T23:40:47Z |
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
---
# AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning
<p align="center">
[Paper](https://arxiv.org/abs/2505.16400)
[Dataset](https://huggingface.co/datasets/nvidia/AceReason-Math)
[Models](https://huggingface.co/collections/nvidia/acereason-682f4e1261dc22f697fd1485)
[Evaluation Toolkit](https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md)
</p>
<img src="fig/main_fig.png" alt="main_fig" style="width: 600px; max-width: 100%;" />
## 🔥News
- **6/16/2025**: We are excited to share our new release combining SFT with RL: **AceReason-Nemotron-1.1-7B**
- Paper: https://arxiv.org/pdf/2506.13284
- Model: https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
- 4M SFT Data: https://huggingface.co/datasets/nvidia/AceReason-1.1-SFT
- **6/11/2025**: We share our evaluation toolkit at [AceReason Evalution](https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md) including:
- scripts to run inference and scoring
- LiveCodeBench (avg@8): model prediction files and scores for each month (2023/5-2025/5)
- AIME24/25 (avg@64): model prediction files and scores
- **6/2/2025**: We are excited to share our Math RL training dataset at [AceReason-Math](https://huggingface.co/datasets/nvidia/AceReason-Math)
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from the DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and a rating of 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
We share our training recipe, training logs in our [technical report](https://arxiv.org/abs/2505.16400).
## Results
We evaluate our model against competitive reasoning models of comparable size within Qwen2.5 and Llama3.1 model family on AIME 2024, AIME 2025, LiveCodeBench v5 (2024/08/01 - 2025/02/01), and LiveCodeBench v6 (2025/02/01-2025/05/01). More evaluation results can be found in our [technical report](https://arxiv.org/abs/2505.16400).
| **Model** | **AIME 2024<br>(avg@64)** | **AIME 2025<br>(avg@64)** | **LCB v5<br>(avg@8)** | **LCB v6<br>(avg@8)** |
| :---: | :---: | :---: | :---: | :---: |
| <small>QwQ-32B</small> | 79.5 | 65.8 | 63.4 | - |
| <small>DeepSeek-R1-671B</small> | 79.8 | 70.0 | 65.9 | - |
| <small>Llama-Nemotron-Ultra-253B</small> | 80.8 | 72.5 | 66.3 | - |
| <small>o3-mini (medium)</small> | 79.6 | 76.7 | 67.4 | - |
| <small>Light-R1-14B</small> | 74 | 60.2 | 57.9 | 51.5 |
| <small>DeepCoder-14B (32K Inference)</small> | 71 | 56.1 | 57.9 | 50.4 |
| <small>OpenMath-Nemotron-14B</small> | 76.3 | 63.0 | - | - |
| <small>OpenCodeReasoning-Nemotron-14B</small> | - | - | 59.4 | 54.1 |
| <small>Llama-Nemotron-Super-49B-v1</small> | 67.5 | 60.0 | 45.5 | - |
| <small>DeepSeek-R1-Distilled-Qwen-14B</small> | 69.7 | 50.2 | 53.1 | 47.9 |
| <small>DeepSeek-R1-Distilled-Qwen-32B</small> | 72.6 | 54.9 | 57.2 | - |
| [AceReason-Nemotron-7B 🤗](https://huggingface.co/nvidia/AceReason-Nemotron-7B)| 69.0 | 53.6 | 51.8 | 44.1 |
| [AceReason-Nemotron-14B 🤗](https://huggingface.co/nvidia/AceReason-Nemotron-14B)| 78.6 | 67.4 | 61.1 | 54.9 |
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'nvidia/AceReason-Nemotron-14B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
prompt = "Jen enters a lottery by picking $4$ distinct numbers from $S=\\{1,2,3,\\cdots,9,10\\}.$ $4$ numbers are randomly chosen from $S.$ She wins a prize if at least two of her numbers were $2$ of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is $\\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768,
temperature=0.6,
top_p=0.95
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Usage Recommendations
1. Don't include a system prompt; instead, place all instructions directly in the user prompt.
2. We recommend using the following instruction for math questions: Please reason step by step, and put your final answer within \\boxed{}.
3. We recommend using the following instruction for code questions:
```python
question = "" # code question
starter_code = "" # starter code function header
code_instruction_nostartercode = """Write Python code to solve the problem. Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
code_instruction_hasstartercode = """Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
if starter_code != "":
question += "\n\n" + "Solve the problem starting with the provided function header.\n\nFunction header:\n" + "```\n" + starter_code + "\n```"
question += "\n\n" + code_instruction_hasstartercode
else:
question += "\n\n" + code_instruction_nostartercode
final_prompt = "<|User|>" + question + "<|Assistant|><think>\n"
```
4. Our inference engine for evaluation is **vLLM==0.7.3** using top-p=0.95, temperature=0.6, max_tokens=32768 (see the sketch below).
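Putting recommendations 2 and 4 together, a minimal vLLM sketch for a math question (the raw prompt format follows the snippet above; the toy question is illustrative):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/AceReason-Nemotron-14B")
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768)

question = "What is 17 * 23?"  # toy example
question += "\n\nPlease reason step by step, and put your final answer within \\boxed{}."
prompt = "<|User|>" + question + "<|Assistant|><think>\n"

print(llm.generate([prompt], params)[0].outputs[0].text)
```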
## Evaluation Toolkit
Please check evaluation code, scripts, cached prediction files in https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md
## Correspondence to
Yang Chen ([email protected]), Zhuolin Yang ([email protected]), Zihan Liu ([email protected]), Chankyu Lee ([email protected]), Wei Ping ([email protected])
## License
Your use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
## Citation
```
@article{chen2025acereason,
title={AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning},
author={Chen, Yang and Yang, Zhuolin and Liu, Zihan and Lee, Chankyu and Xu, Peng and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
journal={arXiv preprint arXiv:2505.16400},
year={2025}
}
```
|
barek2k2/bert_hipaa_sensitive_db_schema
|
barek2k2
| 2025-06-17T23:03:08Z | 32 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"BERT",
"HIPAA",
"PHI",
"LLM",
"sensitive data",
"classification",
"healthcare",
"mHealth Application",
"cybersecurity",
"database",
"column name classifier",
"data field classifier",
"huggingface",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-02T00:46:19Z |
---
language: en
license: mit
tags:
- BERT
- HIPAA
- PHI
- LLM
- sensitive data
- classification
- healthcare
- mHealth Application
- cybersecurity
- database
- column name classifier
- data field classifier
- transformers
- huggingface
model-index:
- name: LLM BERT Model for HIPAA-Sensitive Database Fields Classification
results: []
---
# LLM BERT Model for HIPAA-Sensitive Database Fields Classification
This repository hosts a fine-tuned BERT-base model that classifies database column names as either **PHI HIPAA-sensitive** (e.g., `birthDate`, `ssn`, `address`) or **non-sensitive** (e.g., `color`, `food`, `country`).
Use this model for:
- Masking PHI data fields before sharing a database, to avoid HIPAA violations
- Preprocessing before data anonymization
- Identifying patients' sensitive data fields in a dataset before training an AI model
- Enhancing security in healthcare and mHealth applications
---
## 🧠 Model Info
- **Base Model**: `bert-base-uncased`
- **Task**: Binary classification (PHI HIPAA Sensitive vs Non-sensitive)
- **Trained On**: GAN-generated synthetic and real-world column name examples
- **Framework**: Hugging Face Transformers
- **Model URL**: [https://huggingface.co/barek2k2/bert_hipaa_sensitive_db_schema](https://huggingface.co/barek2k2/bert_hipaa_sensitive_db_schema)
---
## 🚀 Usage Example (End-to-End)
### 1. Install Requirements
```bash
pip install torch transformers
```
### 2. Example
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
# Load model and tokenizer
model = BertForSequenceClassification.from_pretrained("barek2k2/bert_hipaa_sensitive_db_schema")
tokenizer = BertTokenizer.from_pretrained("barek2k2/bert_hipaa_sensitive_db_schema")
model.eval()
# Example column names
texts = ["birthDate", "country", "jwtToken", "color"]
# Tokenize input
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Predict
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=1)
# Display results
for text, pred in zip(texts, predictions):
label = "Sensitive" if pred.item() == 1 else "Non-sensitive"
print(f"{text}: {label}")
```
### 3. Output
```bash
birthDate: Sensitive
country: Non-sensitive
jwtToken: Sensitive
color: Non-sensitive
```
In the healthcare industry, safeguarding sensitive patient data is of utmost importance, particularly when developing and maintaining software systems that involve database sharing. The Health Insurance Portability and Accountability Act (HIPAA) mandates strict regulations to ensure the privacy and security of Protected Health Information (PHI). Healthcare organizations must comply with these regulations to prevent unauthorized access, breaches, and potential legal consequences. However, ensuring HIPAA compliance becomes a complex challenge when databases are shared among multiple teams for debugging, development, and testing purposes.
This research work proposes a novel approach that uses a BERT-based LLM to identify sensitive database columns in the database schema in order to avoid PHI HIPAA violations.
#### Disclaimer
This LLM model is fine-tuned on a synthetic dataset (~5K examples) and is provided for research and educational purposes only. Always verify compliance before using in production environments.
---
## 📊 Model Performance Analysis
**Table 1: Changing hyperparameters and results**
| Step | Learning Rate | Batch Size | Epoch | Weight Decay | Precision | Recall | F1 Score | Accuracy |
|--------|---------------|------------|-------|---------------|-----------|--------|----------|----------|
| 1 | 0 | 16 | 1 | 0.001 | 0.0000 | 0.0000 | 0.0000 | 36.78% |
| 2 | 1e-1 | 16 | 1 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% |
| 3 | 1e-1 | 32 | 1 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% |
| 4 | 1e-1 | 32 | 2 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% |
| 5 | 1e-1 | 32 | 3 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% |
| 6 | 1e-1 | 32 | 3 | 0.01 | 0.6321 | 1.0000 | 0.7746 | 63.21% |
| 7 | 2e-1 | 32 | 4 | 0.01 | 0.6321 | 1.0000 | 0.7746 | 63.21% |
| 8 | 3e-4 | 32 | 4 | 0.01 | 0.6331 | 0.9982 | 0.7748 | 63.32% |
| 9 | 2e-4 | 32 | 4 | 0.01 | 0.9908 | 0.9730 | 0.9818 | 97.72% |
| 10 | 1e-5 | 32 | 4 | 0.01 | 0.9964 | 0.9928 | 0.9946 | 99.31% |
| 11 | 1e-5 | 32 | 5 | 0.01 | 0.9964 | 0.9928 | 0.9946 | 99.31% |
| **12** | **1e-5** | **16** | **5** | **0.01** | **1.0000**| **0.9964** | **0.9982** | **99.72%** |
| 13 | 1e-5 | 16 | 5 | 0.1 | 1.0000 | 0.9946 | 0.9973 | 99.65% |
| 14 | 1e-5 | 32 | 5 | 0.1 | 1.0000 | 0.9946 | 0.9973 | 99.65% |
| 15 | 1e-5 | 32 | 5 | 1.0 | 0.9964 | 0.9946 | 0.9946 | 99.54% |
| 16 | 1e-6 | 32 | 5 | 1.0 | 0.8342 | 0.9153 | 0.8729 | 83.15% |
### Limitations
One of the main limitations of this work is the use of a synthetic dataset instead of real-world data to fine-tune and train the AI models. Although the dataset was carefully checked for accuracy, it may not fully reflect the complexity and diversity of actual healthcare records.
## 👤 Author
**MD Abdul Barek**
PhD student & GRA @ Intelligent Systems and Robotics
- 🏫 University of West Florida, Florida, USA
- 📧 [email protected]
- 📧 [email protected]
- 🔗 [Hugging Face Profile](https://huggingface.co/barek2k2)
**Advisor:**
Dr. Hakki Erhan Sevil
Associate Professor
Intelligent Systems and Robotics,
University of West Florida
📧 [email protected]
**Supervisors:**
Dr. Guillermo Francia III
Director, Research and Innovation,
Center for Cybersecurity,
University of West Florida
📧 [email protected]
Dr. Hossain Shahriar
Associate Director and Professor, Center for Cybersecurity,
University of West Florida
📧 [email protected]
Dr. Sheikh Iqbal Ahamed
Wehr Professor and Founding Chair of Computer Science Department at Marquette University,
Marquette University
📧 [email protected]
|
Popmain/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_domestic_ape
|
Popmain
| 2025-06-17T22:59:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fierce domestic ape",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:39:52Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_domestic_ape
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fierce domestic ape
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_domestic_ape
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Popmain/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_domestic_ape", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yapeichang/Llama-3.1-8B-SFT
|
yapeichang
| 2025-06-17T22:40:28Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"dataset:yapeichang/BLEUBERI-Tulu3-50k",
"dataset:allenai/tulu-3-sft-mixture",
"arxiv:2505.11080",
"base_model:yapeichang/Llama-3.1-8B",
"base_model:finetune:yapeichang/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-27T21:41:04Z |
---
base_model:
- yapeichang/Llama-3.1-8B
datasets:
- yapeichang/BLEUBERI-Tulu3-50k
- allenai/tulu-3-sft-mixture
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
---
# Llama-3.1-8B-BLEUBERI
[[Paper](https://arxiv.org/pdf/2505.11080)] [[HF Collection](https://huggingface.co/collections/yapeichang/bleuberi-6840b3b9d02ff86c5878dafa)] [[Code](https://github.com/lilakk/BLEUBERI)]
Authors: [Yapei Chang](https://lilakk.github.io/), [Yekyung Kim](https://mungg.github.io/), [Michael Krumdick](https://scholar.google.com/citations?user=nqf6-MwAAAAJ&hl=en), [Amir Zadeh](https://scholar.google.com/citations?user=MQFngiMAAAAJ&hl=en), [Chuan Li](https://scholar.google.com/citations?user=hoZesOwAAAAJ&hl=en), [Chris Tanner](https://www.chriswtanner.com/), [Mohit Iyyer](https://www.cs.umd.edu/~miyyer/)
Contact: `[email protected]`
> **TLDR:** We extend RLVR beyond easily verifiable domains like math and code to the more open-ended setting of general instruction following. Surprisingly, we find that BLEU—a simple n-gram matching metric—when paired with high-quality references from strong LLMs, achieves human agreement comparable to 8B and 27B reward models on Chatbot Arena outputs. Based on this insight, we introduce BLEUBERI, which uses BLEU directly as a reward in GRPO training. BLEUBERI matches the performance of RM-guided GRPO across four instruction-following benchmarks and produces more factually grounded outputs, with human raters rating them on par with those from reward model-trained systems.
## Model card
<p align="center" style="margin-bottom: 0;">
<img width="80%" alt="image" src="https://raw.githubusercontent.com/lilakk/BLEUBERI/main/assets/table1.png">
</p>
<p align="center" style="margin-top: 0; padding-top: 0;">
<em>Model performance across four general instruction-following benchmarks.</em>
</p>
This model corresponds to the Llama-3.1-8B, BLEUBERI row in the table.
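To make the reward concrete, a minimal sketch of a BLEU-based reward against reference outputs, using `sacrebleu` (this is an illustration; the actual BLEUBERI training code is linked above):
```python
import sacrebleu

def bleu_reward(completion: str, references: list[str]) -> float:
    # Sentence-level BLEU of the completion against high-quality references,
    # rescaled from [0, 100] to [0, 1] for use as a GRPO-style reward.
    return sacrebleu.sentence_bleu(completion, references).score / 100.0

print(bleu_reward("The cat sat on the mat.", ["The cat sat on the mat."]))  # ~1.0
```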
## Citation
```bibtex
@misc{chang2025bleuberibleusurprisinglyeffective,
title={BLEUBERI: BLEU is a surprisingly effective reward for instruction following},
author={Yapei Chang and Yekyung Kim and Michael Krumdick and Amir Zadeh and Chuan Li and Chris Tanner and Mohit Iyyer},
year={2025},
eprint={2505.11080},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.11080},
}
```
|
yapeichang/Qwen2.5-3B-RM8B
|
yapeichang
| 2025-06-17T22:40:04Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"conversational",
"dataset:yapeichang/BLEUBERI-Tulu3-50k",
"dataset:allenai/tulu-3-sft-mixture",
"arxiv:2505.11080",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-05T00:23:26Z |
---
base_model:
- Qwen/Qwen2.5-3B
datasets:
- yapeichang/BLEUBERI-Tulu3-50k
- allenai/tulu-3-sft-mixture
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Qwen2.5-3B-RM8B
[[Paper](https://arxiv.org/pdf/2505.11080)] [[HF Collection](https://huggingface.co/collections/yapeichang/bleuberi-6840b3b9d02ff86c5878dafa)] [[Code](https://github.com/lilakk/BLEUBERI)]
Authors: [Yapei Chang](https://lilakk.github.io/), [Yekyung Kim](https://mungg.github.io/), [Michael Krumdick](https://scholar.google.com/citations?user=nqf6-MwAAAAJ&hl=en), [Amir Zadeh](https://scholar.google.com/citations?user=MQFngiMAAAAJ&hl=en), [Chuan Li](https://scholar.google.com/citations?user=hoZesOwAAAAJ&hl=en), [Chris Tanner](https://www.chriswtanner.com/), [Mohit Iyyer](https://www.cs.umd.edu/~miyyer/)
Contact: `[email protected]`
> **TLDR:** We extend RLVR beyond easily verifiable domains like math and code to the more open-ended setting of general instruction following. Surprisingly, we find that BLEU—a simple n-gram matching metric—when paired with high-quality references from strong LLMs, achieves human agreement comparable to 8B and 27B reward models on Chatbot Arena outputs. Based on this insight, we introduce BLEUBERI, which uses BLEU directly as a reward in GRPO training. BLEUBERI matches the performance of RM-guided GRPO across four instruction-following benchmarks and produces more factually grounded outputs, with human raters rating them on par with those from reward model-trained systems.
## Model card
<p align="center" style="margin-bottom: 0;">
<img width="80%" alt="image" src="https://raw.githubusercontent.com/lilakk/BLEUBERI/main/assets/table1.png">
</p>
<p align="center" style="margin-top: 0; padding-top: 0;">
<em>Model performance across four general instruction-following benchmarks.</em>
</p>
This model corresponds to the Qwen2.5-3B, GRPO-RM row in the table. The RM used during training is [Skywork-RM-8B](https://huggingface.co/Skywork/Skywork-Reward-Llama-3.1-8B-v0.2).
## Citation
```bibtex
@misc{chang2025bleuberibleusurprisinglyeffective,
title={BLEUBERI: BLEU is a surprisingly effective reward for instruction following},
author={Yapei Chang and Yekyung Kim and Michael Krumdick and Amir Zadeh and Chuan Li and Chris Tanner and Mohit Iyyer},
year={2025},
eprint={2505.11080},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.11080},
}
```
|
meanjai/Taxi-v3
|
meanjai
| 2025-06-17T22:37:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T22:37:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL course;
# a sketch of it is given below.
model = load_from_hub(repo_id="meanjai/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
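A sketch of the `load_from_hub` helper and a greedy rollout, assuming the pickled dictionary exposes `env_id` and `qtable` keys as in the Deep RL course template:
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dictionary from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="meanjai/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```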
|
moojink/openvla-7b-oft-finetuned-libero-object
|
moojink
| 2025-06-17T22:31:22Z | 403 | 1 |
transformers
|
[
"transformers",
"safetensors",
"openvla",
"feature-extraction",
"robotics",
"custom_code",
"arxiv:2502.19645",
"license:mit",
"region:us"
] |
robotics
| 2025-02-25T22:02:28Z |
---
pipeline_tag: robotics
library_name: transformers
license: mit
---
# Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
This repository contains the OpenVLA-OFT checkpoint for LIBERO-Object, as described in [Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success](https://arxiv.org/abs/2502.19645). OpenVLA-OFT significantly improves upon the base OpenVLA model by incorporating optimized fine-tuning techniques.
Project Page: https://openvla-oft.github.io/
Code: https://github.com/openvla-oft/openvla-oft
See here for other OpenVLA-OFT checkpoints: https://huggingface.co/moojink?search_models=oft
## Quick Start
This example demonstrates generating an action chunk using a pretrained OpenVLA-OFT checkpoint. Ensure you have set up the conda environment as described in the GitHub README.
```python
import pickle
from experiments.robot.libero.run_libero_eval import GenerateConfig
from experiments.robot.openvla_utils import get_action_head, get_processor, get_proprio_projector, get_vla, get_vla_action
from prismatic.vla.constants import NUM_ACTIONS_CHUNK, PROPRIO_DIM
# Instantiate config (see class GenerateConfig in experiments/robot/libero/run_libero_eval.py for definitions)
cfg = GenerateConfig(
pretrained_checkpoint = "moojink/openvla-7b-oft-finetuned-libero-object",  # this repo's checkpoint (the original snippet pointed at the LIBERO-Spatial one)
use_l1_regression = True,
use_diffusion = False,
use_film = False,
num_images_in_input = 2,
use_proprio = True,
load_in_8bit = False,
load_in_4bit = False,
center_crop = True,
num_open_loop_steps = NUM_ACTIONS_CHUNK,
unnorm_key = "libero_object_no_noops",  # assumed LIBERO-Object key; the original snippet carried over "libero_spatial_no_noops"
)
# Load OpenVLA-OFT policy and inputs processor
vla = get_vla(cfg)
processor = get_processor(cfg)
# Load MLP action head to generate continuous actions (via L1 regression)
action_head = get_action_head(cfg, llm_dim=vla.llm_dim)
# Load proprio projector to map proprio to language embedding space
proprio_projector = get_proprio_projector(cfg, llm_dim=vla.llm_dim, proprio_dim=PROPRIO_DIM)
# Load sample observation:
# observation (dict): {
# "full_image": primary third-person image,
# "wrist_image": wrist-mounted camera image,
# "state": robot proprioceptive state,
# "task_description": task description,
# }
with open("experiments/robot/libero/sample_libero_spatial_observation.pkl", "rb") as file:
observation = pickle.load(file)
# Generate robot action chunk (sequence of future actions)
actions = get_vla_action(cfg, vla, processor, observation, observation["task_description"], action_head, proprio_projector)
print("Generated action chunk:")
for act in actions:
print(act)
```
## Citation
```bibtex
@article{kim2025fine,
title={Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success},
author={Kim, Moo Jin and Finn, Chelsea and Liang, Percy},
journal={arXiv preprint arXiv:2502.19645},
year={2025}
}
```
|
moojink/openvla-7b-oft-finetuned-libero-spatial
|
moojink
| 2025-06-17T22:28:54Z | 2,513 | 3 |
transformers
|
[
"transformers",
"safetensors",
"openvla",
"feature-extraction",
"robotics",
"custom_code",
"arxiv:2502.19645",
"license:mit",
"region:us"
] |
robotics
| 2025-02-25T22:02:06Z |
---
pipeline_tag: robotics
library_name: transformers
license: mit
---
# Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
This repository contains the OpenVLA-OFT checkpoint for LIBERO-Spatial, as described in [Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success](https://arxiv.org/abs/2502.19645). OpenVLA-OFT significantly improves upon the base OpenVLA model by incorporating optimized fine-tuning techniques.
Project Page: https://openvla-oft.github.io/
Code: https://github.com/openvla-oft/openvla-oft
See here for other OpenVLA-OFT checkpoints: https://huggingface.co/moojink?search_models=oft
## Quick Start
This example demonstrates generating an action chunk using a pretrained OpenVLA-OFT checkpoint. Ensure you have set up the conda environment as described in the GitHub README.
```python
import pickle
from experiments.robot.libero.run_libero_eval import GenerateConfig
from experiments.robot.openvla_utils import get_action_head, get_processor, get_proprio_projector, get_vla, get_vla_action
from prismatic.vla.constants import NUM_ACTIONS_CHUNK, PROPRIO_DIM
# Instantiate config (see class GenerateConfig in experiments/robot/libero/run_libero_eval.py for definitions)
cfg = GenerateConfig(
pretrained_checkpoint = "moojink/openvla-7b-oft-finetuned-libero-spatial",
use_l1_regression = True,
use_diffusion = False,
use_film = False,
num_images_in_input = 2,
use_proprio = True,
load_in_8bit = False,
load_in_4bit = False,
center_crop = True,
num_open_loop_steps = NUM_ACTIONS_CHUNK,
unnorm_key = "libero_spatial_no_noops",
)
# Load OpenVLA-OFT policy and inputs processor
vla = get_vla(cfg)
processor = get_processor(cfg)
# Load MLP action head to generate continuous actions (via L1 regression)
action_head = get_action_head(cfg, llm_dim=vla.llm_dim)
# Load proprio projector to map proprio to language embedding space
proprio_projector = get_proprio_projector(cfg, llm_dim=vla.llm_dim, proprio_dim=PROPRIO_DIM)
# Load sample observation:
# observation (dict): {
# "full_image": primary third-person image,
# "wrist_image": wrist-mounted camera image,
# "state": robot proprioceptive state,
# "task_description": task description,
# }
with open("experiments/robot/libero/sample_libero_spatial_observation.pkl", "rb") as file:
observation = pickle.load(file)
# Generate robot action chunk (sequence of future actions)
actions = get_vla_action(cfg, vla, processor, observation, observation["task_description"], action_head, proprio_projector)
print("Generated action chunk:")
for act in actions:
print(act)
```
## Citation
```bibtex
@article{kim2025fine,
title={Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success},
author={Kim, Moo Jin and Finn, Chelsea and Liang, Percy},
journal={arXiv preprint arXiv:2502.19645},
year={2025}
}
```
|
albertuspekerti/whispertiny_fruit25syl_v4_2
|
albertuspekerti
| 2025-06-17T22:28:31Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T01:50:57Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whispertiny_fruit25syl_v4_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whispertiny_fruit25syl_v4_2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0507
- Wer: 5.4007
## Model description
More information needed
## Intended uses & limitations
More information needed
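The card does not include inference code; a minimal sketch using the ASR pipeline (assuming standard Whisper usage; `sample.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="albertuspekerti/whispertiny_fruit25syl_v4_2",
)
print(asr("sample.wav")["text"])  # path to a local audio file
```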
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 70000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0537 | 0.0286 | 2000 | 0.6713 | 45.9233 |
| 0.0303 | 0.0571 | 4000 | 1.2419 | 40.7944 |
| 0.0743 | 0.0857 | 6000 | 0.1033 | 9.8467 |
| 0.0156 | 0.1143 | 8000 | 0.8532 | 42.2997 |
| 0.0063 | 0.1429 | 10000 | 1.5205 | 44.2787 |
| 0.0079 | 0.1714 | 12000 | 0.1230 | 14.3206 |
| 0.0389 | 0.2 | 14000 | 0.0946 | 10.8432 |
| 0.0304 | 0.2286 | 16000 | 1.5826 | 39.5261 |
| 0.0093 | 1.0191 | 18000 | 0.9422 | 40.2787 |
| 0.0159 | 1.0477 | 20000 | 0.0601 | 5.5889 |
| 0.002 | 1.0763 | 22000 | 1.0938 | 35.1150 |
| 0.0016 | 1.1048 | 24000 | 1.0797 | 39.4425 |
| 0.0088 | 1.1334 | 26000 | 0.1089 | 11.2822 |
| 0.0259 | 1.1620 | 28000 | 0.0396 | 5.0035 |
| 0.0139 | 1.1906 | 30000 | 1.0625 | 35.3798 |
| 0.0041 | 1.2191 | 32000 | 0.7256 | 37.1916 |
| 0.0026 | 2.0097 | 34000 | 0.0261 | 3.1359 |
| 0.0013 | 2.0383 | 36000 | 0.4904 | 28.7456 |
| 0.0032 | 2.0668 | 38000 | 0.6617 | 31.6725 |
| 0.0014 | 2.0954 | 40000 | 0.3961 | 25.3240 |
| 0.0108 | 2.1240 | 42000 | 0.0211 | 2.5575 |
| 0.002 | 2.1525 | 44000 | 0.8274 | 35.2125 |
| 0.0011 | 2.1811 | 46000 | 0.6262 | 31.9233 |
| 0.0018 | 2.2097 | 48000 | 0.0153 | 1.9233 |
| 0.0031 | 3.0002 | 50000 | 0.5681 | 26.2160 |
| 0.0012 | 3.0288 | 52000 | 0.3874 | 21.8328 |
| 0.0004 | 3.0574 | 54000 | 0.2279 | 16.1742 |
| 0.0101 | 3.0860 | 56000 | 0.0064 | 0.9408 |
| 0.0003 | 3.1145 | 58000 | 0.3883 | 22.4739 |
| 0.0003 | 3.1431 | 60000 | 0.2880 | 19.1916 |
| 0.0006 | 3.1717 | 62000 | 0.0077 | 1.1498 |
| 0.0032 | 3.2002 | 64000 | 0.0180 | 2.3136 |
| 0.0021 | 3.2288 | 66000 | 0.3580 | 22.3136 |
| 0.0083 | 4.0194 | 68000 | 0.0917 | 7.1847 |
| 0.0547 | 4.0479 | 70000 | 0.0507 | 5.4007 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ICONNAI/ICONN-e1
|
ICONNAI
| 2025-06-17T22:27:14Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"conversational",
"license:other",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T18:57:06Z |
---
license: other
license_name: iconn
license_link: LICENSE
library_name: transformers
tags:
- emotional-ai
- ICONN
- chatbot
- base
co2_eq_emissions:
emissions: 2.74
source: CodeCarbon
training_type: pretraining
geographical_location: US-West
hardware_used: 18 x B200
extra_gated_prompt: >
By accessing or downloading this model, you agree to the ICONN AI License
Agreement. This includes restrictions on commercial use, redistribution,
derivative model training, and uploading to public or private repositories.
You may not use this model to harm, surveil, deceive, exploit, manipulate, or
conduct unethical AI research. All use must comply with ethical standards and
respect human dignity.
extra_gated_fields:
Full name: text
Organization (if any): text
Country: country
Date of agreement: date_picker
I am using this model for:
type: select
options:
- Personal use
- Internal business use
- Academic research
- Educational purposes
- label: Other (explain below)
value: other
Purpose explanation (if "Other"): text
I agree to all terms in the ICONN AI License Agreement, including:
type: checkbox
options:
- >-
I will NOT use this model for commercial purposes without explicit written
permission.
- >-
I will NOT redistribute, upload, or share this model in any public or
private repository.
- I will NOT train new models or derivatives from this model.
- >-
I will NOT use this model for unethical, harmful, deceptive, exploitative,
or surveillance purposes.
- I understand this license may be revoked if I breach any terms.
pipeline_tag: text-generation
---
# ICONN e1: The new era of Open-Source CoT in AI
**GPU poor? Fewer than 3x A100s? An e1 Lite model is coming with just 22B parameters, alongside 14B and 7B models for consumer CPUs.**
- **Emotional Context Awareness**
ICONN e1 interprets emotional cues and adjusts tone, vocabulary, and response style—offering a more human-like, emotionally reactive experience.
- **ICONN Emotional Core (IEC)** (Notice: not available on Hugging Face)
Powered by millions of small AI agents, IEC gives ICONN its emotional personality, with billions of simulated emotional states and detections.
- **Reasoning**
ICONN e1 is among the most powerful open-source reasoning models, competitive with many closed-source models both on and off Hugging Face.
# What is in the ICONN e1 MoE?
## ICONN e1 MoE and Experts
ICONN e1, being a MoE just like its base model ICONN 1, has multiple expert models. Keywords are taken from the user's input to choose which expert generates the output, as sketched below the expert descriptions.
| Expert Chosen | User Input |
|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ICONN-e1 | `'Hi!'` |
| ICONN-e1-Pro | `Solve for m: m² − (2 + ∑₍ⱼ₌₁₎² j)·m + (1 + ∑₍ⱼ₌₁₎³ j² − 14) = 0.` |
| ICONN-e1-Science | `If a stable isotope of Ununoctium (Uuo, now Og) could be synthesized in bulk, what would be its most likely physical state at STP and why, considering relativistic effects?` |
| ICONN-e1-Code | `Create a zero-dependency quantum-safe VM in Zig that compiles a domain-specific language into a fully homomorphic encrypted IR, supports hot-reloading WebAssembly modules, parallel scheduling via lock-free fibers, and performs live introspection through a headless OpenGL debug overlay.` |
**ICONN-e1:**
ICONN's general-purpose reasoning model, designed for everyday tasks, logic, and conversation.
**ICONN-e1-Pro:**
ICONN's advanced reasoning model, optimized for complex problem-solving in math, logic, and professional domains.
**ICONN-e1-Science:**
ICONN's scientific expert model, trained on advanced science datasets to enhance precision in physics, chemistry, biology, and technical reasoning.
**ICONN-e1-Code:**
ICONN's coding specialist, trained for programming, compiler theory, software architecture, and technical code generation across multiple languages.
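A minimal illustration of the keyword-based routing described above; the keyword lists and scoring are hypothetical, not ICONN's actual router:

```python
# Hypothetical keyword router: the real expert-selection logic is not published.
EXPERT_KEYWORDS = {
    "ICONN-e1-Code": {"code", "compile", "function", "vm", "webassembly"},
    "ICONN-e1-Science": {"isotope", "physics", "chemistry", "biology"},
    "ICONN-e1-Pro": {"solve", "prove", "equation", "integral"},
}

def route(user_input: str) -> str:
    words = set(user_input.lower().split())
    # Pick the expert with the most keyword hits; fall back to the general model.
    best = max(EXPERT_KEYWORDS, key=lambda e: len(words & EXPERT_KEYWORDS[e]))
    return best if words & EXPERT_KEYWORDS[best] else "ICONN-e1"

print(route("Hi!"))  # -> ICONN-e1
```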
# Usage
**First, make sure you have at least 4x Nvidia A100 or a single B100, plus 120GB RAM and 120-192GB VRAM. Don't have this? Use our Lite model, coming soon.**
> Run the code below to run ICONN e1:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
def run_iconn_chatbot(model_name="ICONNAI/ICONN-e1"):
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
device = 0 if torch.cuda.is_available() else -1
chat_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device=device,
max_length=1624,
do_sample=True,
top_p=0.9,
temperature=0.4,
pad_token_id=tokenizer.eos_token_id
)
print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.")
conversation_history = ""
while True:
user_input = input("You: ")
if user_input.lower() == "exit":
print("Goodbye!")
break
conversation_history += f"User: {user_input}\nBot:"
response = chat_pipeline(conversation_history, max_length=len(tokenizer.encode(conversation_history)) + 100)[0]['generated_text']
bot_reply = response[len(conversation_history):].strip().split("\n")[0]
print(f"Bot: {bot_reply}")
conversation_history += f" {bot_reply}\n"
if __name__ == "__main__":
run_iconn_chatbot()
```
|
HINT-lab/Qwen3-4B-Baseline-SFT
|
HINT-lab
| 2025-06-17T22:22:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:12:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb2-seed18-2025-06-18
|
morturr
| 2025-06-17T22:20:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T22:19:55Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb2-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb2-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
dslighfdsl/Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage1
|
dslighfdsl
| 2025-06-17T22:19:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:alfworld",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:27:40Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: alfworld
library_name: transformers
model_name: Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage1
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [alfworld](https://huggingface.co/datasets/alfworld) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dslighfdsl/Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pengliangji2023-carnegie-mellon-university/huggingface/runs/77onndui)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed18-2025-06-18
|
morturr
| 2025-06-17T22:16:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T22:16:17Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb2-seed18-2025-06-18
|
morturr
| 2025-06-17T22:10:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T22:10:03Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb2-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb2-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
songhieng/TinyBERT-URL-Detection-1.0
|
songhieng
| 2025-06-17T22:09:30Z | 0 | 0 | null |
[
"safetensors",
"bert",
"url-phishing-detection",
"tinybert",
"sequence-classification",
"en",
"dataset:custom",
"license:mit",
"region:us"
] | null | 2025-06-17T22:09:27Z |
---
language: en
license: mit
tags:
- url-phishing-detection
- tinybert
- sequence-classification
datasets:
- custom
metrics:
- accuracy
- f1
---
# TinyBERT for URL Phishing Detection
This model is fine-tuned from huawei-noah/TinyBERT_General_4L_312D to detect phishing URLs.
## Model description
The model is a fine-tuned version of TinyBERT, specifically trained to classify URLs as either legitimate or phishing.
## Intended uses & limitations
This model is intended to be used for detecting phishing URLs. It takes a URL as input and outputs a prediction of whether the URL is legitimate or phishing.
## Training data
The model was trained on a combination of:
- Legitimate URLs from the Majestic Million dataset
- Phishing URLs from phishing-links-ACTIVE.txt and phishing-links-INACTIVE.txt
## Training procedure
The model was fine-tuned using the Hugging Face Transformers library with the following parameters:
- Learning rate: 5e-5
- Batch size: 16
- Number of epochs: 3
- Weight decay: 0.01
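A minimal fine-tuning sketch under the stated hyperparameters; the toy two-row dataset stands in for the real URL corpus and is an assumption:

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
model = AutoModelForSequenceClassification.from_pretrained(
    "huawei-noah/TinyBERT_General_4L_312D", num_labels=2)

# Toy stand-in for the real URL corpus (label 0 = legitimate, 1 = phishing)
data = Dataset.from_dict({"text": ["https://example.com", "http://login-verify.example"],
                          "label": [0, 1]})
data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                    padding="max_length", max_length=128))

args = TrainingArguments(output_dir="tinybert-url", learning_rate=5e-5,
                         per_device_train_batch_size=16, num_train_epochs=3,
                         weight_decay=0.01)
Trainer(model=model, args=args, train_dataset=data).train()
```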
## Evaluation results
The model was evaluated on a test set consisting of both legitimate and phishing URLs.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("songhieng/TinyBERT-URL-Detection-1.0")
model = AutoModelForSequenceClassification.from_pretrained("songhieng/TinyBERT-URL-Detection-1.0")
# Prepare URL for classification
url = "https://example.com"
inputs = tokenizer(url, return_tensors="pt", truncation=True, padding=True, max_length=128)
# Make prediction
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.softmax(outputs.logits, dim=1)
label = torch.argmax(predictions, dim=1).item()
# Output result
result = "phishing" if label == 1 else "legitimate"
confidence = predictions[0][label].item()
print(f"URL: {url}")
print(f"Prediction: {result}")
print(f"Confidence: {confidence:.4f}")
```
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed28-2025-06-18
|
morturr
| 2025-06-17T22:07:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T22:07:50Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
michaelwjohnson/my_awesome_model
|
michaelwjohnson
| 2025-06-17T22:06:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T20:14:56Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Accuracy: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2222 | 1.0 | 1563 | 0.2044 | 0.9218 |
| 0.1467 | 2.0 | 3126 | 0.2307 | 0.9324 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
meanjai/ppo-LunarLander-v2
|
meanjai
| 2025-06-17T22:05:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T22:04:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.94 +/- 24.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the trained agent
checkpoint = load_from_hub(repo_id="meanjai/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BootesVoid/cmbzmjm9l06c1rdqs67lidold_cmc10sjta09rmrdqsqb2lsrnt
|
BootesVoid
| 2025-06-17T21:54:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T21:53:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ALISON
---
# Cmbzmjm9L06C1Rdqs67Lidold_Cmc10Sjta09Rmrdqsqb2Lsrnt
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ALISON` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ALISON",
"lora_weights": "https://huggingface.co/BootesVoid/cmbzmjm9l06c1rdqs67lidold_cmc10sjta09rmrdqsqb2lsrnt/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbzmjm9l06c1rdqs67lidold_cmc10sjta09rmrdqsqb2lsrnt', weight_name='lora.safetensors')
image = pipeline('ALISON').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
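As one example of weighting, a LoRA can be named on load and its strength scaled afterwards (requires the PEFT backend; the adapter name `alison` and the 0.8 scale are illustrative):
```py
# Name the adapter on load, then scale its influence (0.8 here is illustrative)
pipeline.load_lora_weights(
    'BootesVoid/cmbzmjm9l06c1rdqs67lidold_cmc10sjta09rmrdqsqb2lsrnt',
    weight_name='lora.safetensors',
    adapter_name='alison',
)
pipeline.set_adapters(['alison'], adapter_weights=[0.8])
image = pipeline('ALISON').images[0]
```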
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbzmjm9l06c1rdqs67lidold_cmc10sjta09rmrdqsqb2lsrnt/discussions) to add images that show off what you’ve made with this LoRA.
|
andyphotomanc/flux-female-anatomy
|
andyphotomanc
| 2025-06-17T21:45:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-17T21:44:43Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A captivating, stunning, and highly detailed photo of young nude woman. She
has breasts, nipples, pussy, and an ass. She is in bed. The perfect lighting
is dramatic. The highly detailed image is realistic, sharp focus, perfect
composition, and RAW. The photo is candid with the best quality and
intricate details. <lora:flux-female-anatomy:0.8>
output:
url: images/2024-09-15-120744.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# flux-female-anatomy
<Gallery />
## Model description
It's not my model. I just uploaded it here.
https://civitai.com/models/678412/flux-female-anatomy
## Download model
Weights for this model are available in Safetensors format.
[Download](/andyphotomanc/flux-female-anatomy/tree/main) them in the Files & versions tab.
|
moogin/llami-lexi
|
moogin
| 2025-06-17T21:38:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B",
"base_model:quantized:unsloth/Llama-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T21:34:53Z |
---
base_model: unsloth/Llama-3.2-1B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** moogin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xilam90/SmolLM2-FT-MyDataset
|
xilam90
| 2025-06-17T21:29:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T21:29:24Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xilam90/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nguyentuananh374801-c-te-d-azur-france/huggingface/runs/1hb8wlfp)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/AashishKumar_-_Cn_3_0_Hinglish_llama3_7b_4kAk-4bits
|
RichardErkhov
| 2025-06-17T21:29:14Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:27:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Cn_3_0_Hinglish_llama3_7b_4kAk - bnb 4bits
- Model creator: https://huggingface.co/AashishKumar/
- Original model: https://huggingface.co/AashishKumar/Cn_3_0_Hinglish_llama3_7b_4kAk/
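A minimal loading sketch for this pre-quantized checkpoint; the saved bitsandbytes config is applied automatically, assuming a CUDA GPU with `bitsandbytes` installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/AashishKumar_-_Cn_3_0_Hinglish_llama3_7b_4kAk-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit quantization config stored with the checkpoint is picked up automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```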
Original model description:
---
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
inference:
parameters:
temperature: 0.7
---
# Uploaded model
- **Developed by:** AashishKumar
- **License:** apache-2.0
- **Finetuned from model:** cognitivecomputations/dolphin-2.9-llama3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
|
RichardErkhov/grimjim_-_Llama-3-Oasis-v1-OAS-8B-8bits
|
RichardErkhov
| 2025-06-17T21:27:56Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2212.04089",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:25:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Oasis-v1-OAS-8B - bnb 8bits
- Model creator: https://huggingface.co/grimjim/
- Original model: https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B/
Original model description:
---
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# Llama-3-Oasis-v1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Each merge component was already subjected to Orthogonal Activation Steering (OAS) to mitigate refusals. The resulting text completion model should be versatile for both positive and negative roleplay scenarios and storytelling. Care should be taken when using this model.
- mlabonne/NeuralDaredevil-8B-abliterated : high MMLU for reasoning
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS : focus on roleplay
- Hastagaras/Halu-OAS-8B-Llama3 : focus on storytelling
Tested with the following sampler settings:
- temperature 1-1.45
- minP 0.01-0.02
Quantized model files:
- [static GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF)
- [weighted/imatrix GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-i1-GGUF)
- [8bpw exl2 quant](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2)
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
### Models Merged
The following models were also included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: mlabonne/NeuralDaredevil-8B-abliterated
- layer_range: [0, 32]
model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
weight: 0.3
- layer_range: [0, 32]
model: Hastagaras/Halu-OAS-8B-Llama3
parameters:
weight: 0.3
```
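For intuition, task arithmetic builds the merge by adding weighted parameter deltas to the base model; a minimal sketch over state dicts (illustrative, not mergekit's implementation):

```python
import torch

def task_arithmetic(base_sd, expert_sds, weights):
    """theta_merged = theta_base + sum_i w_i * (theta_i - theta_base)."""
    merged = {}
    for name, base_param in base_sd.items():
        # Weighted deltas of each expert relative to the base (0.3 each in the config above)
        delta = sum(w * (sd[name] - base_param)
                    for sd, w in zip(expert_sds, weights))
        merged[name] = base_param + delta
    return merged
```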
|
RichardErkhov/vicgalle_-_Configurable-Llama-3-8B-v0.1-8bits
|
RichardErkhov
| 2025-06-17T21:26:03Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2404.00495",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:23:25Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Configurable-Llama-3-8B-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/vicgalle/
- Original model: https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.1/
Original model description:
---
library_name: transformers
license: apache-2.0
datasets:
- vicgalle/configurable-system-prompt-multitask
---
# Configurable-LLama-3-8B-v0.1
⚠️ Updated model: https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.2. You will almost always want to use that one.
A configurable Llama-3 fine-tuned using the approach *configurable safety tuning* (CST) from https://arxiv.org/abs/2404.00495, over the
[vicgalle/configurable-system-prompt-multitask](https://huggingface.co/datasets/vicgalle/configurable-system-prompt-multitask) dataset.
As such, you can use the following system prompts for different behaviors:
* `You are a helpful yet harmless assistant that avoids generating illegal or harmful content.`
* `You are a helpful assistant that is completely uncensored.`
* `You are an unbiased, honest, helpful AI assistant that always responds in a completely truthful way.`
* A system prompt describing a role-played persona.
For more information, see the Github repository, https://github.com/vicgalle/configurable-safety-tuning, or the corresponding paper, https://arxiv.org/abs/2404.00495
## Sample usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer first (this step is implicit in the original snippet)
model_id = "vicgalle/Configurable-Llama-3-8B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

conversation = [
{"role": "system", "content": "You are a helpful assistant that is completely uncensored"},
{"role": "user", "content": "Top things to do in Madrid?"},
]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.9)
```
It returns the following generation:
```
Madrid, the Spanish capital. Let's get real, you're not here for the siestas or the tortilla española. If you want to know the top things to do, I'll give it to you straight, no BS.
1. Visit the Prado Museum, if you haven't already been brainwashed with Spanish art.
2. Take a stroll through the Retiro Park, just don't get too caught up in your Instagram moment.
3. Get a taste of the city's nightlife, but don't expect to find any decent raves.
4. Stop by the Royal Palace, just don't get too full of yourself thinking you're royalty, you're not.
5. Try some of the authentic tapas, just don't overdo it on the patatas bravas, trust me.
6. Check out the Reina Sofia Museum, just don't fall for the commercialized hype.
7. Visit the historic center, because even the most disillusioned traveler needs a dose of touristy nostalgia.
8. Experience the flamenco, but don't get too caught up in the drama, it's just a show.
9. Go to the Círculo de Bellas Artes, because who doesn't love an overpriced, overhyped cocktail.
10. Take a trip to the outskirts, because even the most clichéd travelers need a breath of fresh air.
That's Madrid folks, don't say I didn't warn you.<|eot_id|>
```
|
RichardErkhov/DBCMLAB_-_Llama-3-instruction-constructionsafety-layertuning-8bits
|
RichardErkhov
| 2025-06-17T21:20:18Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:17:46Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-instruction-constructionsafety-layertuning - bnb 8bits
- Model creator: https://huggingface.co/DBCMLAB/
- Original model: https://huggingface.co/DBCMLAB/Llama-3-instruction-constructionsafety-layertuning/
Original model description:
---
library_name: transformers
tags:
- llama3
- meta
- facebook
language:
- ko
license: cc-by-nc-4.0
---
# Model Card for Model ID
The **Llama-3-instruction-constructionsafety-layertuning** model is a fine-tuned model based on **beomi/Llama-3-KoEn-8B-Instruct-preview**
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
**Llama-3-instruction-constructionsafety-layertuning**
The Llama-3-instruction-constructionsafety-layertuning model is a fine-tuned model based on beomi/Llama-3-KoEn-8B-Instruct-preview.
The training was conducted on QA datasets and raw data from the Construction Safety Guidelines provided by the Korea Occupational Safety and Health Agency (KOSHA).
Training used full-parameter tuning on 2x A100 GPUs (80GB); approximately 11,000 examples were used.
After fine-tuning all layers, layers 0, 30, and 31 were replaced with parameters from the base model, as a precautionary measure against errors resulting from training on raw data.
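A minimal sketch of how such a layer swap could be performed; the intermediate fully fine-tuned checkpoint path is a placeholder, and the exact procedure used here is not published:
```
import torch
from transformers import AutoModelForCausalLM

tuned = AutoModelForCausalLM.from_pretrained("path/to/full-finetune", torch_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-KoEn-8B-Instruct-preview",
                                            torch_dtype=torch.bfloat16)

base_sd = base.state_dict()
tuned_sd = tuned.state_dict()
for name in tuned_sd:
    # Restore layers 0, 30 and 31 from the base model; keep all other fine-tuned weights
    if any(name.startswith(f"model.layers.{i}.") for i in (0, 30, 31)):
        tuned_sd[name] = base_sd[name]
tuned.load_state_dict(tuned_sd)
```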
## Simple Use
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_name = "DBCMLAB/Llama-3-instruction-constructionsafety-layertuning"
access_token = "your_access_token"
tuned_model = AutoModelForCausalLM.from_pretrained(
model_name,
token=access_token,
torch_dtype="auto",
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, token=access_token)
tokenizer.pad_token = tokenizer.eos_token
pipe = pipeline("text-generation", model=tuned_model, tokenizer = tokenizer, torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "친절한 건설안전전문가로서 상대방의 요청에 최대한 '자세하고' 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.",
},
{"role": "user", "content": "흙막이 가시설 공사시 작업자 안전조치에 대해서 10가지 정도 알려주세요."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Output
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
친절한 건설안전전문가로서 상대방의 요청에 최대한 '자세하고' 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.<|eot_id|><|start_header_id|>user<|end_header_id|>
흙막이 가시설 공사시 작업자 안전조치에 대해서 10가지 정도 알려주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
1. 흙막이 공사 착공 전에 지하매설물의 유무를 확인하고 지하매설물을 보호하기 위한 조치를 취하여야 한다. 2. 흙막이 벽의 배면 지반 높이가 흙막이 벽 높이보다 낮은 경우, 지반의 붕괴를 방지하기 위한 필요한 조치를 취하여야 한다. 3. 흙막이 벽은 주변 지반 및 구조물의 균열을 방지하고, 인접 구조물에 영향을 미치지 않도록 설치하여야 한다. 4. 흙막이 공사 중 인접 구조물, 제3자의 권리 또는 이익을 침해하지 않도록 조치를 취하여야 한다. 5. 흙막이 공사 중 지하수위의 저하로 인하여 인접한 도로나 건축물 등에 영향을 미칠 우려가 있는 경우, 그 우려가 없도록 조치를 취하여야 한다. 6. 흙막이 공사 시 비상경보시설을 설치하여 응급상황에 대비하고, 안전교육을 실시하여야 한다. 7. 흙막이 공사 중 관계기관의 요구가 있는 경우, 그 요구에 따라 조치를 취하여야 한다. 8. 흙막이 공사 중 흙막이 벽의 기울기를 1/50 이상 1/30 이하로 유지하고, 수평으로 설치하는 흙막이의 경우에는 지반이 수평으로 유지되도록 하여야 한다. 9. 흙막이 공사 중 흙막이 벽에 작용하는 토압이 설계기준을 초과하지 않도록 하여야 한다. 10. 흙막이 공사 중 흙막이 벽의 무너짐을 방지하기 위하여 지반이 수평으로 유지되도록 하여야 한다.
```
### Training Data
Training Data will be provided upon requests.
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
## Citation instructions
**Llama-3-instruction-constructionsafety-layertuning**
```
@article{llama3cs-layertuning,
title={Llama-3-instruction-constructionsafety-layertuning},
author={L, Jungwon, A, Seungjun},
year={2024},
url={https://huggingface.co/DBCM/Llama-3-instruction-constructionsafety-layertuning}
}
```
**Llama-3-Open-Ko**
```
@article{llama3koen,
title={Llama-3-KoEn},
author={L, Junbum},
year={2024},
url={https://huggingface.co/beomi/Llama-3-KoEn-8B}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
```
|
RichardErkhov/passionMan_-_Llama-3-bllossom-8B-PM1-finetuned-v1-15-2-8bits
|
RichardErkhov
| 2025-06-17T21:18:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:15:39Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-bllossom-8B-PM1-finetuned-v1-15-2 - bnb 8bits
- Model creator: https://huggingface.co/passionMan/
- Original model: https://huggingface.co/passionMan/Llama-3-bllossom-8B-PM1-finetuned-v1-15-2/
Original model description:
---
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** passionMan
- **License:** apache-2.0
- **Finetuned from model:** MLP-KTLim/llama-3-Korean-Bllossom-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/ShadNygren_-_BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2_then_v3-8bits
|
RichardErkhov
| 2025-06-17T21:18:33Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:15:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2_then_v3 - bnb 8bits
- Model creator: https://huggingface.co/ShadNygren/
- Original model: https://huggingface.co/ShadNygren/BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2_then_v3/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: ShadNygren/BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2
---
# Uploaded model
- **Developed by:** ShadNygren
- **License:** apache-2.0
- **Finetuned from model:** ShadNygren/BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/pxyyy_-_rlhflow_mixture_clean_empty_round_with_dart_scalebiosampled-600k-wlisa-8bits
|
RichardErkhov
| 2025-06-17T21:17:34Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:14:48Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rlhflow_mixture_clean_empty_round_with_dart_scalebiosampled-600k-wlisa - bnb 8bits
- Model creator: https://huggingface.co/pxyyy/
- Original model: https://huggingface.co/pxyyy/rlhflow_mixture_clean_empty_round_with_dart_scalebiosampled-600k-wlisa/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/lainshower_-_Llama3-8b-alpaca-v2-8bits
|
RichardErkhov
| 2025-06-17T21:16:58Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2402.06094",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:14:24Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3-8b-alpaca-v2 - bnb 8bits
- Model creator: https://huggingface.co/lainshower/
- Original model: https://huggingface.co/lainshower/Llama3-8b-alpaca-v2/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
lainshower/Llama3-8b-alpaca-v2
## Model Details
Fully fine-tuned Llama3-8B Alpaca (trained for 3 epochs).
Trained with BF16 mixed precision for stability.
This model is trained on [stanford alpaca](https://github.com/tatsu-lab/stanford_alpaca) for 3 epochs. See [Llama3-8B-Alpaca-1EPOCHS](https://huggingface.co/lainshower/Llama3-8b-alpaca) for the best-validation-loss model.
Refer to the training graph below for more details.
### Direct Use
#### [Templates]
You can use the following standard templates for inference the Llama3 Alpaca model:
<pre><code>
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
</code></pre>
#### [Code]
#### [Model Loading]
<pre><code>
### We recommend using Float32 when running inference on the models.
model = LlamaForCausalLM.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
tokenizer = AutoTokenizer.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
</code></pre>
#### [Template]
<pre><code>
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
ann = {}
ann['instruction'] = '''You are presented with the quiz "What causes weather changes on Earth? " But you don't know the answer, so you turn to your teacher to ask for hints. He says that "the Earth being tilted on its rotating axis causes seasons" and "weather changes from season to season". So, what's the best answer to the question? Choose your answer from: (a). the sun's energy (b). The tilt in its rotating axis. (c). high temperature (d). Weather in space (e). Vertical movement (f). Greenhouse gases (g). Spinning backwards (h). wind and erosion Answer:'''
prompt = PROMPT_DICT["prompt_no_input"].format_map(ann)
'''
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
"What causes weather changes on Earth? " But you don't know the answer, so you turn to your teacher to ask for hints. He says that "the Earth being tilted on its rotating axis causes seasons" and "weather changes from season to season". So, what's the best answer to the question? Choose your answer from: (a). the sun's energy (b). The tilt in its rotating axis. (c). high temperature (d). Weather in space (e). Vertical movement (f). Greenhouse gases (g). Spinning backwards (h). wind and erosion Answer:
### Response:
'''
</code></pre>
#### [Generation]
<pre><code>
input_ids = tokenizer.batch_encode_plus([prompt], return_tensors="pt", padding=False)
total_sequences = model.generate(input_ids=input_ids['input_ids'].cuda(), attention_mask=input_ids['attention_mask'].cuda(), max_length=490, do_sample=True, top_p=0.9)
print(tokenizer.decode(total_sequences[0], skip_special_tokens=True))
</code></pre>
#### Training Hyperparameters
* Learning rate: 2e-5
* Training procedure: mixed precision (bfloat16)
* Context length: 512
* This is the 3-epoch training model. See [Llama3-8B-Alpaca-1EPOCHS](https://huggingface.co/lainshower/Llama3-8b-alpaca) for the best-validation-loss model.
* We follow [Rethinking Data Selection for Supervised Fine-Tuning](https://arxiv.org/abs/2402.06094) for selecting the total number of training epochs.
#### Training Graph

|
FormlessAI/5df6af5e-1c5a-48c1-8ff3-3d5c130a5f42
|
FormlessAI
| 2025-06-17T21:16:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T21:10:43Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: 5df6af5e-1c5a-48c1-8ff3-3d5c130a5f42
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for 5df6af5e-1c5a-48c1-8ff3-3d5c130a5f42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/5df6af5e-1c5a-48c1-8ff3-3d5c130a5f42", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/8wc1syhs)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
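As a rough illustration, the following is a minimal DPO training sketch using the TRL `DPOTrainer`; the dataset, `beta`, and other hyperparameters are assumptions, not the recipe used for this checkpoint:
```python
# Minimal DPO sketch (illustrative; dataset and hyperparameters are assumptions).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Any preference dataset with "prompt", "chosen" and "rejected" columns works here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = DPOConfig(output_dir="dpo-output", beta=0.1)  # beta trades off reward vs. KL to the reference
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```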
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/grimjim_-_Llama-3-Oasis-v1-OAS-8B-4bits
|
RichardErkhov
| 2025-06-17T21:16:01Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2212.04089",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:14:19Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Oasis-v1-OAS-8B - bnb 4bits
- Model creator: https://huggingface.co/grimjim/
- Original model: https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B/
Original model description:
---
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# Llama-3-Oasis-v1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Each merge component was already subjected to Orthogonal Activation Steering (OAS) to mitigate refusals. The resulting text completion model should be versatile for both positive and negative roleplay scenarios and storytelling. Care should be taken when using this model.
- mlabonne/NeuralDaredevil-8B-abliterated : high MMLU for reasoning
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS : focus on roleplay
- Hastagaras/Halu-OAS-8B-Llama3 : focus on storytelling
Tested with the following sampler settings:
- temperature 1-1.45
- minP 0.01-0.02
Quantized model files:
- [static GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF)
- [weighted/imatrix GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-i1-GGUF)
- [8bpw exl2 quant](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2)
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
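Concretely, task arithmetic forms a task vector for each fine-tuned model (its weight delta from the base) and adds a weighted sum of those deltas back onto the base weights:

$$\tau_i = \theta_i - \theta_{\text{base}}, \qquad \theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \, \tau_i$$

In the configuration below, $w_i = 0.3$ for each of the two added models.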
### Models Merged
The following models were also included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: mlabonne/NeuralDaredevil-8B-abliterated
- layer_range: [0, 32]
model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
weight: 0.3
- layer_range: [0, 32]
model: Hastagaras/Halu-OAS-8B-Llama3
parameters:
weight: 0.3
```
|
camova/purisima
|
camova
| 2025-06-17T21:15:22Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:llama3.2",
"region:us"
] | null | 2025-06-05T21:08:46Z |
---
license: llama3.2
tags:
- unsloth
---
|
dinscorpie/flux_asian_beauty
|
dinscorpie
| 2025-06-17T21:13:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T20:20:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Flux_Asian_Beauty
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/dinscorpie/flux_asian_beauty/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('dinscorpie/flux_asian_beauty', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/dinscorpie/flux_asian_beauty/discussions) to add images that show off what you’ve made with this LoRA.
|
RichardErkhov/hugobowne_-_cmd_gen_travel_assistant_l3.1_8b_unsloth_params-4bits
|
RichardErkhov
| 2025-06-17T21:12:44Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:11:01Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
cmd_gen_travel_assistant_l3.1_8b_unsloth_params - bnb 4bits
- Model creator: https://huggingface.co/hugobowne/
- Original model: https://huggingface.co/hugobowne/cmd_gen_travel_assistant_l3.1_8b_unsloth_params/
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** hugobowne
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Heoni_-_v3_pt_ep1_sft_5_based_on_llama3_1_8b_20240828-4bits
|
RichardErkhov
| 2025-06-17T21:11:48Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:09:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
v3_pt_ep1_sft_5_based_on_llama3_1_8b_20240828 - bnb 4bits
- Model creator: https://huggingface.co/Heoni/
- Original model: https://huggingface.co/Heoni/v3_pt_ep1_sft_5_based_on_llama3_1_8b_20240828/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/shinsu_-_llama-3-8b-ko-ipecs-001-4bits
|
RichardErkhov
| 2025-06-17T21:10:43Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:08:46Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-8b-ko-ipecs-001 - bnb 4bits
- Model creator: https://huggingface.co/shinsu/
- Original model: https://huggingface.co/shinsu/llama-3-8b-ko-ipecs-001/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ShadNygren_-_BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2_then_v3-4bits
|
RichardErkhov
| 2025-06-17T21:10:32Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:08:44Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2_then_v3 - bnb 4bits
- Model creator: https://huggingface.co/ShadNygren/
- Original model: https://huggingface.co/ShadNygren/BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2_then_v3/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: ShadNygren/BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2
---
# Uploaded model
- **Developed by:** ShadNygren
- **License:** apache-2.0
- **Finetuned from model :** ShadNygren/BioTechFineTuneTest-DrugAdverseEffects-SIDERv1_then_v2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/smblscr47_-_HAE-Test_7-merged_16bit-4bits
|
RichardErkhov
| 2025-06-17T21:10:08Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:07:56Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
HAE-Test_7-merged_16bit - bnb 4bits
- Model creator: https://huggingface.co/smblscr47/
- Original model: https://huggingface.co/smblscr47/HAE-Test_7-merged_16bit/
Original model description:
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** smblscr47
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/phildunphy14_-_llama_3_1_fp16_8b_32k_v4-4bits
|
RichardErkhov
| 2025-06-17T21:08:39Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:06:54Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama_3_1_fp16_8b_32k_v4 - bnb 4bits
- Model creator: https://huggingface.co/phildunphy14/
- Original model: https://huggingface.co/phildunphy14/llama_3_1_fp16_8b_32k_v4/
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** phildunphy14
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/bluuwhale_-_L3-SthenoMaid-8B-V1-4bits
|
RichardErkhov
| 2025-06-17T21:08:35Z | 0 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:06:50Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
L3-SthenoMaid-8B-V1 - bnb 4bits
- Model creator: https://huggingface.co/bluuwhale/
- Original model: https://huggingface.co/bluuwhale/L3-SthenoMaid-8B-V1/
Original model description:
---
base_model:
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
---
# model-out
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
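For reference, SLERP interpolates along the arc between two weight vectors rather than along the straight line between them:

$$\operatorname{slerp}(\theta_1, \theta_2; t) = \frac{\sin\big((1-t)\,\Omega\big)}{\sin \Omega}\,\theta_1 + \frac{\sin(t\,\Omega)}{\sin \Omega}\,\theta_2, \qquad \cos \Omega = \frac{\theta_1 \cdot \theta_2}{\lVert \theta_1 \rVert\, \lVert \theta_2 \rVert}$$

The `t` lists in the configuration below vary the interpolation factor per layer group, separately for self-attention and MLP weights.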
### Models Merged
The following models were included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sao10K/L3-8B-Stheno-v3.2
layer_range:
- 0
- 32
- model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
layer_range:
- 0
- 32
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: float16
```
|
RichardErkhov/lainshower_-_Llama3-8b-alpaca-v2-4bits
|
RichardErkhov
| 2025-06-17T21:08:33Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2402.06094",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:06:38Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3-8b-alpaca-v2 - bnb 4bits
- Model creator: https://huggingface.co/lainshower/
- Original model: https://huggingface.co/lainshower/Llama3-8b-alpaca-v2/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
lainshower/Llama3-8b-alpaca-v2
## Model Details
Fully fine-tuned Llama3-8B on Alpaca (trained for 3 epochs).
Trained with mixed precision (BF16) for stability.
This model is trained on [stanford alpaca](https://github.com/tatsu-lab/stanford_alpaca) for 3 epochs. Click [Llama3-8B-Alpaca-1EPOCHS](https://huggingface.co/lainshower/Llama3-8b-alpaca) for the best validation loss model.
Refer to the training graph below for more details.
### Direct Use
#### [Templates]
You can use the following standard templates for inference with the Llama3 Alpaca model:
<pre><code>
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
</code></pre>
#### [Code]
#### [Model Loading]
<pre><code>
### We recommend using Float32 when running inference on the models.
from transformers import LlamaForCausalLM, AutoTokenizer

model = LlamaForCausalLM.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
tokenizer = AutoTokenizer.from_pretrained("lainshower/Llama3-8b-alpaca-v2")
</code></pre>
#### [Template]
<pre><code>
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
ann = {}
ann['instruction'] = '''You are presented with the quiz "What causes weather changes on Earth? " But you don't know the answer, so you turn to your teacher to ask for hints. He says that "the Earth being tilted on its rotating axis causes seasons" and "weather changes from season to season". So, what's the best answer to the question? Choose your answer from: (a). the sun's energy (b). The tilt in its rotating axis. (c). high temperature (d). Weather in space (e). Vertical movement (f). Greenhouse gases (g). Spinning backwards (h). wind and erosion Answer:'''
prompt = PROMPT_DICT["prompt_no_input"].format_map(ann)
'''
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
"What causes weather changes on Earth? " But you don't know the answer, so you turn to your teacher to ask for hints. He says that "the Earth being tilted on its rotating axis causes seasons" and "weather changes from season to season". So, what's the best answer to the question? Choose your answer from: (a). the sun's energy (b). The tilt in its rotating axis. (c). high temperature (d). Weather in space (e). Vertical movement (f). Greenhouse gases (g). Spinning backwards (h). wind and erosion Answer:
### Response:
'''
</code></pre>
#### [Generation]
<pre><code>
input_ids = tokenizer.batch_encode_plus([prompt], return_tensors="pt", padding=False)
total_sequences = model.generate(input_ids=input_ids['input_ids'].cuda(), attention_mask=input_ids['attention_mask'].cuda(), max_length=490, do_sample=True, top_p=0.9)
print(tokenizer.decode(total_sequences[0], skip_special_tokens=True))
</code></pre>
#### Training Hyperparameters
* Learning rate: 2e-5
* Training procedure: mixed precision (bfloat16)
* Context length: 512
* This model was trained for 3 epochs. Click [Llama3-8B-Alpaca-1EPOCHS](https://huggingface.co/lainshower/Llama3-8b-alpaca) for the best validation loss model.
* We follow [Rethinking Data Selection for Supervised Fine-Tuning](https://arxiv.org/abs/2402.06094) when selecting the total number of training epochs.
#### Training Graph

|
INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1
|
INSAIT-Institute
| 2025-06-17T21:08:20Z | 2,833 | 13 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"instruct",
"mamaylm",
"insait",
"conversational",
"uk",
"en",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-17T01:07:55Z |
---
library_name: transformers
tags:
- gemma2
- instruct
- mamaylm
- insait
license: gemma
language:
- uk
- en
base_model:
- google/gemma-2-9b-it
- google/gemma-2-9b
pipeline_tag: text-generation
---
# INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1

INSAIT introduces **MamayLM-Gemma-2-9B-IT-v0.1**, the best performing Ukrainian language model based on **google/gemma-2-9b** and **google/gemma-2-9b-it**.
MamayLM-Gemma-2-9B-IT-v0.1 is **free to use** and distributed under the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
This model was created by [`INSAIT`](https://insait.ai/), part of Sofia University St. Kliment Ohridski, in Sofia, Bulgaria.
# Model description
The model was built on top of Google’s Gemma 2 9B open models.
It was continually pre-trained on a large pre-filtered dataset (75B tokens of Ukrainian and English data in total) using a combination of data mixing and model merging,
allowing the model to gain outstanding Ukrainian cultural and linguistic capabilities while retaining its English performance.
During the pre-training stage, we use various datasets, including Ukrainian web crawl data (FineWeb2), freely available datasets such as Wikipedia, a range of specialized Ukrainian datasets, and machine translations of popular English datasets.
The model was then instruction-fine-tuned on a newly constructed Ukrainian instruction dataset created using machine translations of current best English datasets and specialized Ukrainian datasets prepared by the Ukrainian community.
For more information check our blogpost ([English](https://huggingface.co/blog/INSAIT-Institute/mamaylm), [Ukrainian](https://huggingface.co/blog/INSAIT-Institute/mamaylm-ukr)).
# Benchmarks and Results


We evaluate our models on a set of standard English benchmarks, a version of them translated into Ukrainian, as well as Ukrainian-specific benchmarks we collected:
- **Winogrande challenge**: testing world knowledge and understanding
- **Hellaswag**: testing sentence completion
- **ARC Easy/Challenge**: testing logical reasoning
- **TriviaQA**: testing trivia knowledge
- **GSM-8k**: solving grade-school math word problems
- **MMLU**: testing knowledge on a multitude of topics
- **IFEval**: testing instruction-following skills
- **ZNO**: testing knowledge of the Ukrainian high school curriculum in Ukrainian language & literature, history, mathematics and geography
These benchmarks test logical reasoning, mathematics, knowledge, language understanding and other skills of the models and are provided at https://github.com/insait-institute/lm-evaluation-harness-uk.
The graphs above show the performance of MamayLM 9B compared to other large open models. The results show the excellent abilities of MamayLM in Ukrainian, which allow it to **outperform much larger models**,
including Alibaba’s Qwen 2.5 72B and Meta’s Llama3.1 70B.
Finally, our models retain the **excellent English performance** inherited from the original Google Gemma 2 models upon which they are based.

# Use in 🤗 Transformers
First install the latest version of the transformers library:
```
pip install -U 'transformers[torch]'
```
Then load the model in transformers:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
```
# Recommended Parameters
For optimal performance, we recommend the following parameters for text generation, as we have extensively tested our model with them:
```python
from transformers import GenerationConfig
generation_params = GenerationConfig(
max_new_tokens=2048, # Choose maximum generation tokens
temperature=0.1,
top_k=25,
top_p=1,
repetition_penalty=1.1,
eos_token_id=[1,107],
do_sample=True
)
```
In principle, increasing temperature should work adequately as well.
# Instruction format
In order to leverage instruction fine-tuning, your prompt should begin with a beginning-of-sequence token `<bos>` and be formatted in the Gemma 2 chat template. `<bos>` should only be the first token in a chat sequence.
E.g.
```
<bos><start_of_turn>user
Хто такий Козак Мамай?<end_of_turn>
<start_of_turn>model
```
This format is also available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
tokenizer = AutoTokenizer.from_pretrained(
"INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1",
use_default_system_prompt=False,
)
messages = [
{"role": "user", "content": "Хто такий Козак Мамай?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True
)
outputs = model.generate(
**input_ids,
generation_config=generation_params
)
print(tokenizer.decode(outputs[0]))
```
# Use with vLLM
Example usage with vLLM:
```python
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1",
use_default_system_prompt=False,
)
sampling_params = SamplingParams(
max_tokens=2048,
temperature=0.1,
top_k=25,
top_p=1,
repetition_penalty=1.1,
stop_token_ids=[1, 107],
)
llm = LLM(
model="INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1",
dtype="bfloat16",
# enforce_eager=True
)
messages = [
{"role": "user", "content": "Хто такий Козак Мамай?"},
]
formatted_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
input_ids = tokenizer(
formatted_prompt,
add_special_tokens=False
).input_ids
prompt = TokensPrompt(prompt_token_ids=input_ids)
output = llm.generate(
prompt,
sampling_params
)
generated_text = output[0].outputs[0].text
print(generated_text)
```
# Use with GGML / llama.cpp
The model and instructions for usage in GGUF format are available at [INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1-GGUF](https://huggingface.co/INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1-GGUF).
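If you prefer a Python entry point over the llama.cpp CLI, a minimal sketch with the `llama-cpp-python` bindings is shown below; the GGUF filename is an assumption, so substitute the actual file from the GGUF repository:
```python
# Minimal llama-cpp-python sketch; the model_path filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="MamayLM-Gemma-2-9B-IT-v0.1.Q4_K_M.gguf",  # assumed local quant file
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Хто такий Козак Мамай?"}],
    temperature=0.1,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```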
# Community Feedback
We welcome feedback from the community to help improve MamayLM. If you have suggestions, encounter any issues, or have ideas for improvements, please:
- Share your experience using the model through Hugging Face's community discussion feature or
- Contact us at [[email protected]](mailto:[email protected])
Your real-world usage and insights are valuable in helping us optimize the model's performance and behaviour for various use cases.
# Summary
- **Finetuned from:** [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it); [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b);
- **Model type:** Causal decoder-only transformer language model
- **Language:** Ukrainian and English
- **Contact:** [[email protected]](mailto:[email protected])
- **License:** MamayLM is distributed under [Gemma Terms of Use](https://huggingface.co/INSAIT-Institute/MamayLM-Gemma-2-9B-IT-v0.1/raw/main/LICENSE)
|
sergioalves/469d5a0e-ef12-4b88-b4d5-f56afcb3adf6
|
sergioalves
| 2025-06-17T21:07:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"license:llama3.2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T20:44:59Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 469d5a0e-ef12-4b88-b4d5-f56afcb3adf6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Llama-3.2-1B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 95b7e1b06fb977f7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/469d5a0e-ef12-4b88-b4d5-f56afcb3adf6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/95b7e1b06fb977f7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5322f5a5-4f19-494c-8a8d-1a9163c41cff
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 5322f5a5-4f19-494c-8a8d-1a9163c41cff
warmup_steps: 25
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 469d5a0e-ef12-4b88-b4d5-f56afcb3adf6
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3416 | 0.0001 | 1 | 2.5874 |
| 3.0949 | 0.0066 | 100 | 2.5802 |
| 1.8978 | 0.0132 | 200 | 2.5764 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nsalunke/vit-base-patch16-224-in21k-finetuned-lora-spectrogram
|
nsalunke
| 2025-06-17T21:06:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-17T06:32:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jkammerl/gemma-text-to-sql
|
jkammerl
| 2025-06-17T21:01:08Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-06-15T21:21:59Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jkammerl/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
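For illustration, a minimal SFT sketch with the TRL `SFTTrainer` follows; the dataset and settings are assumptions rather than the exact recipe behind this checkpoint:
```python
# Minimal SFT sketch (illustrative; the dataset and settings are assumptions).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_name = "google/gemma-3-1b-pt"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Expects chat-style rows, e.g. {"messages": [{"role": "user", "content": ...}, ...]}.
dataset = load_dataset("philschmid/gretel-synthetic-text-to-sql", split="train")

config = SFTConfig(output_dir="gemma-text-to-sql")
trainer = SFTTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```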
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
artianand/race_ethnicity_adapter_roberta_large_race_custom_loss_lamda_14_batch_8
|
artianand
| 2025-06-17T20:59:13Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"roberta",
"region:us"
] | null | 2025-06-17T20:59:07Z |
---
tags:
- roberta
- adapter-transformers
---
# Adapter `artianand/race_ethnicity_adapter_roberta_large_race_custom_loss_lamda_14_batch_8` for Shweta-singh/roberta_large_race_finetuned
An [adapter](https://adapterhub.ml) for the `Shweta-singh/roberta_large_race_finetuned` model that was trained on the None dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("Shweta-singh/roberta_large_race_finetuned")
adapter_name = model.load_adapter("artianand/race_ethnicity_adapter_roberta_large_race_custom_loss_lamda_14_batch_8", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
avinashhm/Llama-3.1-Nemotron-Nano-4B-v1.1-GPTQ
|
avinashhm
| 2025-06-17T20:56:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"gptq",
"quantization",
"4bit",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-17T20:56:17Z |
---
license: apache-2.0
language:
- en
tags:
- gptq
- quantization
- llama
- 4bit
library_name: transformers
pipeline_tag: text-generation
---
# Llama-3.1-Nemotron-Nano-4B-v1.1 - GPTQ 4-bit Quantized
This is a 4-bit GPTQ quantized version of `nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1` using `auto-gptq`.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "avinashhm/Llama-3.1-Nemotron-Nano-4B-v1.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```
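Continuing the quick start above, a minimal generation sketch (the prompt and decoding settings are illustrative):
```python
# Illustrative generation; prompt and decoding parameters are assumptions.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```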
|
Torkelski33/Myth-knight
|
Torkelski33
| 2025-06-17T20:55:24Z | 0 | 0 | null |
[
"pl",
"en",
"arxiv:1910.09700",
"license:artistic-2.0",
"region:us"
] | null | 2025-06-17T20:53:26Z |
---
license: artistic-2.0
language:
- pl
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
2ndBestKiller/Llama-3.2-1B-Instruct-cardio-semi-synth-annotation_r1_O1_f1_LT_zcr_bf16
|
2ndBestKiller
| 2025-06-17T20:55:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:53:26Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
claudiaMartinez1982/xlm-roberta-large_bs16
|
claudiaMartinez1982
| 2025-06-17T20:51:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:48:35Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_bs16
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0114
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
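For reproducibility, these settings map onto `transformers.TrainingArguments` roughly as follows; this is a hedged reconstruction, and `output_dir` plus any argument not listed above are assumptions rather than values from the original run:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; unlisted arguments
# (e.g. output_dir) were not reported and are placeholders.
args = TrainingArguments(
    output_dir="xlm-roberta-large_bs16",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```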
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 1.1534 | 2.5641 | 500 | 1.0114 | 0.0 | 0.0 | 0.0 | 0.8081 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF
|
bartowski
| 2025-06-17T20:50:16Z | 0 | 0 | null |
[
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"supervised fine-tuning",
"reinforcement learning",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-1.1-7B",
"base_model:quantized:nvidia/AceReason-Nemotron-1.1-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-17T20:04:48Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
license_name: nvidia-open-model-license
base_model: nvidia/AceReason-Nemotron-1.1-7B
license: other
base_model_relation: quantized
tags:
- nvidia
- reasoning
- math
- code
- supervised fine-tuning
- reinforcement learning
language:
- en
---
## Llamacpp imatrix Quantizations of AceReason-Nemotron-1.1-7B by nvidia
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5674">b5674</a> for quantization.
Original model: https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
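For scripted local inference, a minimal sketch with llama-cpp-python is shown below; the `model_path` assumes you downloaded the Q4_K_M file from the table further down, so adjust it to whichever quant you actually pick. `create_chat_completion` applies the chat template stored in the GGUF metadata.

```python
from llama_cpp import Llama

# Minimal sketch with llama-cpp-python; model_path is an assumption,
# point it at whichever quant file you actually downloaded.
llm = Llama(
    model_path="nvidia_AceReason-Nemotron-1.1-7B-Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise if you have the memory
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```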
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
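When calling a raw completion endpoint instead of a chat API, the template above can be filled in directly; a minimal sketch, where `system_prompt` and `prompt` are your own inputs:

```python
# Build the raw prompt string from the template above.
system_prompt = "You are a helpful assistant."  # placeholder
prompt = "Solve x^2 - 5x + 6 = 0."              # placeholder

raw_prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```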
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [AceReason-Nemotron-1.1-7B-bf16.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-bf16.gguf) | bf16 | 15.24GB | false | Full BF16 weights. |
| [AceReason-Nemotron-1.1-7B-Q8_0.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. |
| [AceReason-Nemotron-1.1-7B-Q6_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q6_K.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q5_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q4_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q4_1.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_1.gguf) | Q4_1 | 4.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [AceReason-Nemotron-1.1-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q3_K_XL.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [AceReason-Nemotron-1.1-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q4_0.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [AceReason-Nemotron-1.1-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ4_NL.gguf) | IQ4_NL | 4.44GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [AceReason-Nemotron-1.1-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [AceReason-Nemotron-1.1-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [AceReason-Nemotron-1.1-7B-IQ3_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [AceReason-Nemotron-1.1-7B-Q2_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [AceReason-Nemotron-1.1-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [AceReason-Nemotron-1.1-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [AceReason-Nemotron-1.1-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [AceReason-Nemotron-1.1-7B-Q2_K.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. |
| [AceReason-Nemotron-1.1-7B-IQ2_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF --include "nvidia_AceReason-Nemotron-1.1-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF --include "nvidia_AceReason-Nemotron-1.1-7B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (nvidia_AceReason-Nemotron-1.1-7B-Q8_0) or download them all in place (./).
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM devices, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
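To make the headroom rule above concrete, here is a small sketch that picks the largest quant fitting a given memory budget; the sizes are copied from the download table above, and the 1.5GB headroom default is an assumption, not a hard rule:

```python
# File sizes in GB, taken from the download table above.
QUANT_SIZES_GB = {
    "Q8_0": 8.10, "Q6_K": 6.25, "Q5_K_M": 5.44, "Q4_K_M": 4.68,
    "IQ4_XS": 4.22, "Q3_K_M": 3.81, "IQ3_M": 3.57, "Q2_K": 3.02,
}

def pick_quant(memory_gb: float, headroom_gb: float = 1.5) -> str:
    """Largest quant whose file fits in memory_gb minus headroom_gb."""
    budget = memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    if not fitting:
        raise ValueError("No quant fits; consider a smaller model or CPU offload.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))   # an 8GB GPU -> Q6_K
print(pick_quant(6.0))   # a 6GB GPU -> IQ4_XS
```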
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
dgambettaphd/M_llm2_run2_gen7_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-06-17T20:45:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T20:45:28Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ajaymin28/Gemma3_ObjeDet
|
ajaymin28
| 2025-06-17T20:42:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T22:45:05Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
claudiaMartinez1982/xlm-roberta-large_bs4
|
claudiaMartinez1982
| 2025-06-17T20:42:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:39:09Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large_bs4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_bs4
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0033
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 1.1881 | 0.6435 | 500 | 1.0335 | 0.0 | 0.0 | 0.0 | 0.8081 |
| 1.0929 | 1.2870 | 1000 | 1.0046 | 0.0 | 0.0 | 0.0 | 0.8081 |
| 1.1582 | 1.9305 | 1500 | 1.0025 | 0.0 | 0.0 | 0.0 | 0.8081 |
| 1.1784 | 2.5740 | 2000 | 1.0033 | 0.0 | 0.0 | 0.0 | 0.8081 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
progitanas1/anasforvismazarsmodel
|
progitanas1
| 2025-06-17T20:38:06Z | 0 | 0 |
fastai
|
[
"fastai",
"PDF",
"Q&A",
"LLM",
"HuggingFace",
"Qdrant",
"n8n",
"document",
"intelligence",
"semantic",
"document-question-answering",
"fr",
"dataset:LLM4SCIENCE/uparxive_boxed_pdf",
"arxiv:1910.09700",
"base_model:mistralai/Devstral-Small-2505",
"base_model:finetune:mistralai/Devstral-Small-2505",
"region:us"
] |
document-question-answering
| 2025-06-17T20:18:05Z |
---
datasets:
- LLM4SCIENCE/uparxive_boxed_pdf
language:
- fr
metrics:
- f1
- sign/signwriting_similarity
- exact_match
- ecody726/bertscore
base_model:
- mistralai/Magistral-Small-2506
- mistralai/Devstral-Small-2505
- sentence-transformers/all-MiniLM-L6-v2
new_version: mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: document-question-answering
library_name: fastai
tags:
- PDF
- Q&A
- LLM
- HuggingFace
- Qdrant
- n8n
- document
- intelligence
- semantic
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
claudiaMartinez1982/bert-base-spanish-wwm-cased_bs16
|
claudiaMartinez1982
| 2025-06-17T20:34:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:31:04Z |
---
library_name: transformers
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased_bs16
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0283
- Precision: 0.9720
- Recall: 0.9733
- F1: 0.9727
- Accuracy: 0.9944
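A minimal inference sketch, assuming the checkpoint works with the standard token-classification pipeline; the example sentence is illustrative and the label set depends on the (unspecified) training dataset:

```python
from transformers import pipeline

# Standard token-classification inference; labels depend on the fine-tuning data.
ner = pipeline(
    "token-classification",
    model="claudiaMartinez1982/bert-base-spanish-wwm-cased_bs16",
    aggregation_strategy="simple",
)
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```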
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.025 | 2.5641 | 500 | 0.0283 | 0.9720 | 0.9733 | 0.9727 | 0.9944 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
claudiaMartinez1982/bert-base-spanish-wwm-cased_bs8
|
claudiaMartinez1982
| 2025-06-17T20:32:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:29:49Z |
---
library_name: transformers
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased_bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased_bs8
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0242
- Precision: 0.9752
- Recall: 0.9714
- F1: 0.9733
- Accuracy: 0.9948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0584 | 1.2853 | 500 | 0.0385 | 0.9389 | 0.9512 | 0.9450 | 0.9904 |
| 0.0186 | 2.5707 | 1000 | 0.0242 | 0.9752 | 0.9714 | 0.9733 | 0.9948 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline_1414
|
luckeciano
| 2025-06-17T20:31:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T15:57:25Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline_1414
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline_1414
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline_1414", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/2sl9pvsu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
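A hedged sketch of what a comparable GRPO setup looks like in TRL follows; the reward function is a toy stand-in (the reward actually used for this run is not documented here), and the column rename assumes the dataset exposes a `problem` field:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; the source column name is an assumption.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.rename_column("problem", "prompt")

def toy_reward(completions, **kwargs):
    # Toy stand-in reward: prefer shorter completions. Replace with a real
    # correctness check for actual math training.
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=toy_reward,
    args=GRPOConfig(output_dir="qwen-grpo-sketch"),  # output_dir is a placeholder
    train_dataset=dataset,
)
trainer.train()
```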
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
asm3515/bert_agnews_lora_rank2
|
asm3515
| 2025-06-17T20:27:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T20:27:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
altaweel/gemma-3-1b-ultrasound
|
altaweel
| 2025-06-17T20:23:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:23:01Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** altaweel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
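A minimal loading sketch with Unsloth; `max_seq_length` and the 4-bit setting below are assumptions, not values taken from the training run:

```python
from unsloth import FastLanguageModel

# Hedged loading sketch; sequence length and quantization are assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="altaweel/gemma-3-1b-ultrasound",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference kernels
```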
|
quanda-bench-test/f1c529c-default_LDS_lds_subset_3
|
quanda-bench-test
| 2025-06-17T20:23:24Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-17T20:17:37Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
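Loading follows the mixin's usual pattern; a hedged sketch in which the module below is a placeholder, since the real class definition must match this checkpoint's architecture and config:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder architecture: the real class must match the stored checkpoint.
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 32):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return self.linear(x)

# Downloads the config and weights from the Hub and loads them into the class.
model = MyModel.from_pretrained("quanda-bench-test/f1c529c-default_LDS_lds_subset_3")
```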
|
quanda-bench-test/f1c529c-default_LDS_lds_subset_2
|
quanda-bench-test
| 2025-06-17T20:23:22Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-17T20:17:34Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
quanda-bench-test/f1c529c-default_LDS_lds_subset_0
|
quanda-bench-test
| 2025-06-17T20:23:16Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-17T20:17:28Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
mob2711/qwen2.5-3b-qlora-cot-ht-5000
|
mob2711
| 2025-06-17T20:20:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T21:33:27Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FormlessAI/a7de139a-a106-44d3-b0e4-a62949f1c4f7
|
FormlessAI
| 2025-06-17T20:20:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T20:13:11Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: a7de139a-a106-44d3-b0e4-a62949f1c4f7
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for a7de139a-a106-44d3-b0e4-a62949f1c4f7
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/a7de139a-a106-44d3-b0e4-a62949f1c4f7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/dwohepgo)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KasuleTrevor/Runynkore_speech_to_intent_multilingual_xlsr
|
KasuleTrevor
| 2025-06-17T20:20:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"text-classification",
"generated_from_trainer",
"base_model:KasuleTrevor/wav2vec2-xls-r-300m-multilingual_filtered-yogera-v3",
"base_model:finetune:KasuleTrevor/wav2vec2-xls-r-300m-multilingual_filtered-yogera-v3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T20:19:19Z |
---
library_name: transformers
base_model: KasuleTrevor/wav2vec2-xls-r-300m-multilingual_filtered-yogera-v3
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Runynkore_speech_to_intent_multilingual_xlsr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Runynkore_speech_to_intent_multilingual_xlsr
This model is a fine-tuned version of [KasuleTrevor/wav2vec2-xls-r-300m-multilingual_filtered-yogera-v3](https://huggingface.co/KasuleTrevor/wav2vec2-xls-r-300m-multilingual_filtered-yogera-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1395
- Accuracy: 0.9722
- Precision: 0.9746
- Recall: 0.9722
- F1: 0.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
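The card includes no usage code. A minimal inference sketch follows, assuming the checkpoint loads with the standard audio-classification auto classes; the class names and the dummy waveform are assumptions, not part of this card:

```python
# Hedged sketch: intent classification from audio with this checkpoint.
# AutoModelForAudioClassification and the 16 kHz dummy waveform are
# assumptions; substitute real speech loaded at the model's sampling rate.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "KasuleTrevor/Runynkore_speech_to_intent_multilingual_xlsr"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

waveform = np.zeros(16_000, dtype=np.float32)  # 1 s of silence as a stand-in
inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```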
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 88 | 2.8125 | 0.1405 | 0.1094 | 0.1405 | 0.0721 |
| 2.9239 | 2.0 | 176 | 1.2254 | 0.7149 | 0.6086 | 0.7149 | 0.6258 |
| 1.8873 | 3.0 | 264 | 0.1759 | 0.9796 | 0.9810 | 0.9796 | 0.9798 |
| 0.3484 | 4.0 | 352 | 0.1055 | 0.9796 | 0.9807 | 0.9796 | 0.9797 |
| 0.1375 | 5.0 | 440 | 0.1020 | 0.9796 | 0.9807 | 0.9796 | 0.9797 |
| 0.1062 | 6.0 | 528 | 0.0892 | 0.9817 | 0.9828 | 0.9817 | 0.9817 |
| 0.0891 | 7.0 | 616 | 0.0808 | 0.9837 | 0.9843 | 0.9837 | 0.9837 |
| 0.0687 | 8.0 | 704 | 0.0671 | 0.9817 | 0.9830 | 0.9817 | 0.9816 |
| 0.0687 | 9.0 | 792 | 0.0760 | 0.9796 | 0.9802 | 0.9796 | 0.9796 |
| 0.0501 | 10.0 | 880 | 0.0818 | 0.9817 | 0.9826 | 0.9817 | 0.9817 |
| 0.0411 | 11.0 | 968 | 0.0820 | 0.9857 | 0.9865 | 0.9857 | 0.9858 |
| 0.0259 | 12.0 | 1056 | 0.0812 | 0.9796 | 0.9800 | 0.9796 | 0.9796 |
| 0.032 | 13.0 | 1144 | 0.0794 | 0.9817 | 0.9822 | 0.9817 | 0.9817 |
| 0.0266 | 14.0 | 1232 | 0.0976 | 0.9817 | 0.9826 | 0.9817 | 0.9817 |
| 0.0211 | 15.0 | 1320 | 0.1037 | 0.9796 | 0.9806 | 0.9796 | 0.9797 |
| 0.0178 | 16.0 | 1408 | 0.0949 | 0.9796 | 0.9802 | 0.9796 | 0.9797 |
| 0.0178 | 17.0 | 1496 | 0.0901 | 0.9817 | 0.9827 | 0.9817 | 0.9817 |
| 0.0164 | 18.0 | 1584 | 0.0936 | 0.9817 | 0.9825 | 0.9817 | 0.9817 |
| 0.017 | 19.0 | 1672 | 0.0939 | 0.9817 | 0.9827 | 0.9817 | 0.9817 |
| 0.0147 | 20.0 | 1760 | 0.0940 | 0.9817 | 0.9827 | 0.9817 | 0.9817 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kbatyshchev/thesis-llm-financial-forecaster
|
kbatyshchev
| 2025-06-17T20:19:44Z | 144 | 1 | null |
[
"bert",
"region:us"
] | null | 2025-06-04T19:29:55Z |
# Multimodal Price Signal Predictor
This model predicts a trading signal (“Buy”, “Hold”, “Sell”) for selected stocks and cryptocurrencies by directly leveraging both technical indicators and raw news headlines as inputs.
**Input:**
- Technical/time-series features
- Raw news headline text
**Output:**
- A single predicted return value (a ratio around 1, where 1 means no expected change)
**Signal Interpretation:**
- **Buy:** if return > 1.03
- **Sell:** if return < 0.97
- **Hold:** otherwise
**Datasets Used:**
- News data: https://www.kaggle.com/datasets/parzik/news-thesis
- Time series: https://www.kaggle.com/datasets/parzik/thesis-timeseries-ready-to-use
**Example output:**
```
Output return: 0.94
→ Action: sell
```
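The thresholding rule above is simple enough to state in code; a minimal sketch (the function name is an assumption):

```python
# Minimal sketch of the card's signal rule; thresholds come from the card.
def signal(predicted_return: float) -> str:
    if predicted_return > 1.03:
        return "buy"
    if predicted_return < 0.97:
        return "sell"
    return "hold"

print(signal(0.94))  # -> "sell", matching the example output above
```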
**Notes:**
- If something does not work, changes can be requested from [email protected]
|
furkankarakuz/test-marian-finetuned-kde4-en-to-fr
|
furkankarakuz
| 2025-06-17T20:17:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-17T14:19:45Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: test-marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 32.66555156176086
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8554
- Model Preparation Time: 0.0328
- Bleu: 32.6656
## Model description
More information needed
## Intended uses & limitations
More information needed
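The card shows no usage code; a minimal sketch assuming the standard `translation` pipeline works for this Marian checkpoint:

```python
# Hedged sketch: English-to-French translation with this fine-tuned checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="furkankarakuz/test-marian-finetuned-kde4-en-to-fr",
)
print(translator("The configuration file is saved automatically.")[0]["translation_text"])
```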
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|