| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| satarupa22/whisper-small-asr | satarupa22 | 2025-04-03T14:48:50Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-03-23T15:53:10Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-small-asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-asr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `Seq2SeqTrainingArguments` mapping is sketched after this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
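The values above map fairly directly onto `Seq2SeqTrainingArguments` from `transformers`. The sketch below illustrates that mapping only; the actual training script is not published, and the output directory name is a placeholder.
```python
# Illustrative mapping of the listed hyperparameters to Seq2SeqTrainingArguments.
# This is a sketch, not the script used to train this checkpoint.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-asr",     # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,      # gives the total train batch size of 32
    optim="adamw_torch",                # AdamW (torch), betas/epsilon at defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                          # "Native AMP" mixed precision
)
```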
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| Ayamohamed/DiaClassModel | Ayamohamed | 2025-04-03T14:48:15Z | 4 | 0 | torch | ["torch", "resnet", "image-classification", "diagrams", "pytorch", "computer-vision", "dataset:phiyodr/coco2017", "dataset:HuggingFaceM4/ChartQA", "dataset:JasmineQiuqiu/diagrams_with_captions_2", "base_model:microsoft/resnet-18", "base_model:finetune:microsoft/resnet-18", "license:apache-2.0", "region:us"] | image-classification | 2025-03-27T15:31:29Z |
---
library_name: torch
tags:
- image-classification
- resnet
- diagrams
- pytorch
- computer-vision
license: apache-2.0
metrics:
- accuracy
- f1
- recall
- precision
base_model:
- microsoft/resnet-18
pipeline_tag: image-classification
datasets:
- phiyodr/coco2017
- HuggingFaceM4/ChartQA
- JasmineQiuqiu/diagrams_with_captions_2
---
# Model Card for Diagram Classification Model
## Model Details
### Model Description
This is a fine-tuned ResNet-18 model trained for binary image classification, distinguishing between **diagrams** and **non-diagrams**. The model is designed for use in applications that need automatic filtering or processing of diagram-based content.
- **Developed by:** Aya Mohamed
- **Model type:** ResNet-18 (Fine-tuned for image classification)
- **Language(s) (NLP):** Not applicable (Computer Vision model)
- **License:** Apache 2.0
- **Finetuned from model:** `microsoft/resnet-18`
### Model Sources
- **Repository:** [Ayamohamed/diaclass-model](https://huggingface.co/Ayamohamed/diaclass-model)
## Uses
### Direct Use
This model is intended for classifying images as **diagrams** or **non-diagrams**. It can be used in:
- **Document processing** (extracting diagrams from PDFs or scanned documents)
- **Chart-based visual question generation (VQG)**
- **Content moderation** (filtering diagram images from general image datasets)
### Out-of-Scope Use
- Not suitable for **multi-class classification** beyond diagrams vs. non-diagrams.
- Not designed for **hand-drawn sketches** or **complex figures with mixed elements**.
## Bias, Risks, and Limitations
- The model's accuracy depends on the training dataset, which may not cover all possible diagram styles.
- May misclassify **charts, blueprints, or artistic drawings** if they resemble diagrams.
### Recommendations
Users should **evaluate the model** on their specific dataset before deployment to ensure it performs well in their context.
## 🚀 How to Use
### **1️⃣ Load the Model from Hugging Face**
You can download the model and load it using `torch`.
```python
import torch
from huggingface_hub import hf_hub_download
# Download model from Hugging Face Hub
model_path = hf_hub_download(repo_id="Ayamohamed/DiaClassification", filename="model.pth")
# Load model
model_hg = torch.load(model_path)
model_hg.eval() # Set to evaluation mode
```
### **2️⃣ Preprocess and Classify an Image**
```python
from PIL import Image
from torchvision import transforms

# Define Image Transformations
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

def predict(image_path):
    image = Image.open(image_path).convert("RGB")
    image = transform(image).unsqueeze(0)
    with torch.no_grad():
        output = model_hg(image)
    class_idx = torch.argmax(output, dim=1).item()
    return "Diagram" if class_idx == 0 else "Not Diagram"

# Example usage
print(predict("my-diagram-classifier/31188_1536932698.jpg"))
```
## Training Details
### Training Data
The model was trained using:
- **ChartQA dataset** (for diagram samples)
- **JasmineQiuqiu/diagrams_with_captions_2** (for diagram samples)
- **COCO dataset (subset)** (for non-diagram samples)
### Training Procedure
- **Pretrained model:** `microsoft/resnet-18`
- **Optimization:** Adam optimizer
- **Loss function:** Cross-entropy loss
- **Training duration:** Approx. X hours on an NVIDIA GPU
## Evaluation
### Testing Data & Metrics
- **Dataset:** Held-out test set from ChartQA, AI2D-RST, and COCO
- **Metrics:**
- **Test Loss:** 0.0371
- **Test Accuracy:** 99.08%
- **Precision:** 0.9995
- **Recall:** 0.9820
- **F1 Score:** 0.9907
## Environmental Impact
- **Hardware Used:** NVIDIA A100 GPU
- **Compute Hours:** Approx. X hours
- **Estimated Carbon Emission:** [Use MLCO2 Calculator](https://mlco2.github.io/impact#compute)
## Citation
If you use this model, please cite:
```bibtex
@misc{aya2025diaclass,
author = {Aya Mohamed},
title = {Diagram Classification Model},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/Ayamohamed/diaclass-model}
}
```
| Willyromero/xtts-v2-clara-prod | Willyromero | 2025-04-03T14:47:50Z | 0 | 0 | null | ["license:other", "region:us"] | null | 2025-04-03T08:50:16Z |
---
license: other
license_name: coqui-public-model-license
license_link: LICENSE
---
| moyixiao/qwen15_0403_4096_badam | moyixiao | 2025-04-03T14:46:52Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-03T14:44:56Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
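Based on the repository tags (`transformers`, `qwen2`, `text-generation`, `conversational`), loading presumably follows the standard causal-LM route sketched below; the chat-template usage and generation settings are assumptions, since the card does not document them.
```python
# Hedged sketch: load this repo as a standard causal LM with transformers.
# Assumes the checkpoint ships a Qwen2-style chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moyixiao/qwen15_0403_4096_badam"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```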
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| xw17/Phi-3.5-mini-instruct_finetuned_3_def_lora3 | xw17 | 2025-04-03T14:46:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-04-03T14:46:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf | RichardErkhov | 2025-04-03T14:45:09Z | 0 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational"] | null | 2025-04-03T12:41:02Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-3-mini-4k-finetuned-full - GGUF
- Model creator: https://huggingface.co/Sabyasachi/
- Original model: https://huggingface.co/Sabyasachi/phi-3-mini-4k-finetuned-full/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-3-mini-4k-finetuned-full.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q2_K.gguf) | Q2_K | 1.32GB |
| [phi-3-mini-4k-finetuned-full.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [phi-3-mini-4k-finetuned-full.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [phi-3-mini-4k-finetuned-full.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [phi-3-mini-4k-finetuned-full.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [phi-3-mini-4k-finetuned-full.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q3_K.gguf) | Q3_K | 1.82GB |
| [phi-3-mini-4k-finetuned-full.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [phi-3-mini-4k-finetuned-full.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [phi-3-mini-4k-finetuned-full.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [phi-3-mini-4k-finetuned-full.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q4_0.gguf) | Q4_0 | 2.03GB |
| [phi-3-mini-4k-finetuned-full.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [phi-3-mini-4k-finetuned-full.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [phi-3-mini-4k-finetuned-full.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q4_K.gguf) | Q4_K | 2.23GB |
| [phi-3-mini-4k-finetuned-full.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [phi-3-mini-4k-finetuned-full.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q4_1.gguf) | Q4_1 | 2.24GB |
| [phi-3-mini-4k-finetuned-full.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q5_0.gguf) | Q5_0 | 2.46GB |
| [phi-3-mini-4k-finetuned-full.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [phi-3-mini-4k-finetuned-full.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q5_K.gguf) | Q5_K | 2.62GB |
| [phi-3-mini-4k-finetuned-full.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [phi-3-mini-4k-finetuned-full.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q5_1.gguf) | Q5_1 | 2.68GB |
| [phi-3-mini-4k-finetuned-full.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q6_K.gguf) | Q6_K | 2.92GB |
| [phi-3-mini-4k-finetuned-full.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf/blob/main/phi-3-mini-4k-finetuned-full.Q8_0.gguf) | Q8_0 | 3.78GB |
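The card itself gives no usage instructions; one possible route is to download a single quantized file from the table above and run it locally with `llama-cpp-python`, as in the hedged sketch below. The chosen filename matches the Q4_K_M row, and the prompt format of the fine-tuned model is an assumption.
```python
# Hedged sketch: run one of the GGUF quants listed above with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Sabyasachi_-_phi-3-mini-4k-finetuned-full-gguf",
    filename="phi-3-mini-4k-finetuned-full.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Question: What does Q4_K_M quantization mean? Answer:", max_tokens=64)
print(result["choices"][0]["text"])
```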
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| seifetho/Llama-3.1-8B-bnb-4bit-python | seifetho | 2025-04-03T14:44:44Z | 30 | 0 | transformers | ["transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-02-24T15:50:28Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** seifetho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| Superrrdamn/task-7-microsoft-Phi-4-mini-instruct | Superrrdamn | 2025-04-03T12:28:02Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-4-mini-instruct", "base_model:adapter:microsoft/Phi-4-mini-instruct", "region:us"] | null | 2025-04-03T12:27:59Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
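The front matter declares `microsoft/Phi-4-mini-instruct` as the base model and `peft` as the library, so loading presumably means attaching this adapter to that base checkpoint. The sketch below reflects that assumption; prompt format and generation settings are illustrative only.
```python
# Hedged sketch: attach this PEFT adapter to its declared base model.
# Adapter-specific details (task type, target modules) are not documented here.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-4-mini-instruct"
adapter_id = "Superrrdamn/task-7-microsoft-Phi-4-mini-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```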
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
| RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf | RichardErkhov | 2025-04-03T12:27:23Z | 0 | 0 | null | ["gguf", "endpoints_compatible", "region:us", "conversational"] | null | 2025-04-03T10:51:47Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi35_tictactoe_dpo2epoch_v5 - GGUF
- Model creator: https://huggingface.co/ihughes15234/
- Original model: https://huggingface.co/ihughes15234/phi35_tictactoe_dpo2epoch_v5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi35_tictactoe_dpo2epoch_v5.Q2_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q2_K.gguf) | Q2_K | 1.35GB |
| [phi35_tictactoe_dpo2epoch_v5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [phi35_tictactoe_dpo2epoch_v5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [phi35_tictactoe_dpo2epoch_v5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [phi35_tictactoe_dpo2epoch_v5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [phi35_tictactoe_dpo2epoch_v5.Q3_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q3_K.gguf) | Q3_K | 1.75GB |
| [phi35_tictactoe_dpo2epoch_v5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [phi35_tictactoe_dpo2epoch_v5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [phi35_tictactoe_dpo2epoch_v5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [phi35_tictactoe_dpo2epoch_v5.Q4_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q4_0.gguf) | Q4_0 | 2.03GB |
| [phi35_tictactoe_dpo2epoch_v5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [phi35_tictactoe_dpo2epoch_v5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [phi35_tictactoe_dpo2epoch_v5.Q4_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q4_K.gguf) | Q4_K | 2.16GB |
| [phi35_tictactoe_dpo2epoch_v5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [phi35_tictactoe_dpo2epoch_v5.Q4_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q4_1.gguf) | Q4_1 | 2.24GB |
| [phi35_tictactoe_dpo2epoch_v5.Q5_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q5_0.gguf) | Q5_0 | 2.46GB |
| [phi35_tictactoe_dpo2epoch_v5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [phi35_tictactoe_dpo2epoch_v5.Q5_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q5_K.gguf) | Q5_K | 2.53GB |
| [phi35_tictactoe_dpo2epoch_v5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [phi35_tictactoe_dpo2epoch_v5.Q5_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q5_1.gguf) | Q5_1 | 2.68GB |
| [phi35_tictactoe_dpo2epoch_v5.Q6_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q6_K.gguf) | Q6_K | 2.92GB |
| [phi35_tictactoe_dpo2epoch_v5.Q8_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo2epoch_v5-gguf/blob/main/phi35_tictactoe_dpo2epoch_v5.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
base_model: ihughes15234/phi35_tictactoe_dpo1epoch_v5
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model :** ihughes15234/phi35_tictactoe_dpo1epoch_v5
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| Eckilibrium/w2v-bert-2.0-dysarthric-child-de_20ep | Eckilibrium | 2025-04-03T12:27:16Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-04-03T11:59:25Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-dysarthric-child-de_20ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-dysarthric-child-de_20ep
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7723
- Wer: 0.9592
## Model description
More information needed
## Intended uses & limitations
More information needed
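The repository is tagged for `automatic-speech-recognition` with the `transformers` library, so transcription would presumably go through the ASR pipeline as sketched below; the audio file name is a placeholder, and given the high WER reported above, outputs should be treated with caution.
```python
# Hedged sketch: run the fine-tuned checkpoint through the ASR pipeline.
# "sample.wav" is a placeholder for real German child-speech audio.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Eckilibrium/w2v-bert-2.0-dysarthric-child-de_20ep",
)
print(asr("sample.wav")["text"])
```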
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| No log | 1.0 | 18 | 18.1755 | 1.0408 |
| 78.6001 | 2.0 | 36 | 8.0284 | 1.0 |
| 35.1971 | 3.0 | 54 | 3.5255 | 1.0 |
| 35.1971 | 4.0 | 72 | 3.2512 | 1.0 |
| 13.3401 | 5.0 | 90 | 3.1447 | 1.0 |
| 12.4582 | 6.0 | 108 | 2.9271 | 1.0 |
| 11.2356 | 7.0 | 126 | 2.6043 | 1.0 |
| 11.2356 | 8.0 | 144 | 2.1627 | 1.0 |
| 8.9596 | 9.0 | 162 | 1.8655 | 1.0 |
| 6.8221 | 10.0 | 180 | 1.6269 | 1.0021 |
| 6.8221 | 11.0 | 198 | 1.6577 | 0.9957 |
| 5.15 | 12.0 | 216 | 1.4913 | 1.0 |
| 4.0202 | 13.0 | 234 | 1.3987 | 0.9936 |
| 3.2137 | 14.0 | 252 | 1.5071 | 0.9721 |
| 3.2137 | 15.0 | 270 | 1.4261 | 0.9721 |
| 2.4846 | 16.0 | 288 | 1.4136 | 0.9549 |
| 1.8232 | 17.0 | 306 | 1.5552 | 0.9399 |
| 1.8232 | 18.0 | 324 | 1.5154 | 0.9270 |
| 1.5073 | 18.9014 | 340 | 1.7723 | 0.9592 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
| xw17/Qwen2-1.5B-Instruct_finetuned_3_def_lora3 | xw17 | 2025-04-03T12:27:12Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-04-03T12:27:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| egeozsoy/Qwen2-0.5B-GRPO-test | egeozsoy | 2025-04-03T12:27:00Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:AI-MO/NuminaMath-TIR", "arxiv:2402.03300", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2-1.5B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-03-17T12:34:50Z |
---
base_model: Qwen/Qwen2-1.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="egeozsoy/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/egeozsoy/huggingface/runs/r2ss6153)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| harshitha672/Phi_4_model_Finetuned_GitaGPT | harshitha672 | 2025-04-03T12:26:33Z | 0 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-04-03T12:10:58Z |
---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harshitha672
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| Mael7307/Llama-3.2-3B-Instruct_CoT-40steps | Mael7307 | 2025-04-03T12:25:13Z | 0 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-04-03T12:23:35Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Mael7307
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| aarath97/my-awesome-adapter | aarath97 | 2025-04-03T12:24:41Z | 0 | 0 | adapter-transformers | ["adapter-transformers", "roberta", "dataset:rotten_tomatoes", "region:us"] | null | 2025-04-03T11:49:46Z |
---
tags:
- roberta
- adapter-transformers
datasets:
- rotten_tomatoes
---
# Adapter `aarath97/my-awesome-adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rotten_tomatoes](https://huggingface.co/datasets/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("aarath97/my-awesome-adapter", set_active=True)
```
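Continuing from the snippet above, classification might look like the following hedged sketch; it assumes the active head returns standard sequence-classification logits and that the label order follows the rotten_tomatoes convention (0 = negative, 1 = positive), neither of which is documented in this card.
```python
# Hedged sketch: classify a sentence with the activated adapter head.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("A charming, well-acted little film.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)           # `model` from the loading snippet above
pred = outputs.logits.argmax(dim=-1).item()
print("positive" if pred == 1 else "negative")
```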
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
| weizhepei/Qwen2.5-3B-WebArena-Lite-SFT-CoT-QwQ-32B-epoch-3-no-packing | weizhepei | 2025-04-03T12:24:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:weizhepei/webarena-lite-SFT-CoT-QwQ-32B", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-03T10:16:20Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: weizhepei/webarena-lite-SFT-CoT-QwQ-32B
library_name: transformers
model_name: Qwen2.5-3B-WebArena-Lite-SFT-CoT-QwQ-32B-epoch-3-no-packing
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-3B-WebArena-Lite-SFT-CoT-QwQ-32B-epoch-3-no-packing
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [weizhepei/webarena-lite-SFT-CoT-QwQ-32B](https://huggingface.co/datasets/weizhepei/webarena-lite-SFT-CoT-QwQ-32B) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="weizhepei/Qwen2.5-3B-WebArena-Lite-SFT-CoT-QwQ-32B-epoch-3-no-packing", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/uva-llm/huggingface/runs/tlqes274)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| kostus/flux-dev-lora-sonar3 | kostus | 2025-04-03T12:23:50Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-04-03T12:23:48Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SONAR3
---
# Flux Dev Lora Sonar3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SONAR3` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "SONAR3",
    "lora_weights": "https://huggingface.co/kostus/flux-dev-lora-sonar3/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kostus/flux-dev-lora-sonar3', weight_name='lora.safetensors')
image = pipeline('SONAR3').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/kostus/flux-dev-lora-sonar3/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/medical-phi-FineTuned-i1-GGUF
|
mradermacher
| 2025-04-03T12:23:20Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:fasghar786/medical-phi-FineTuned",
"base_model:quantized:fasghar786/medical-phi-FineTuned",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-04-03T11:15:25Z
|
---
base_model: fasghar786/medical-phi-FineTuned
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/fasghar786/medical-phi-FineTuned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/medical-phi-FineTuned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
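For a quick local smoke test, a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (one of several GGUF runtimes; not an official example, and the file name below assumes the Q4_K_M quant from the table has been downloaded) could look like this:
```python
from llama_cpp import Llama

# Load the locally downloaded GGUF quant (file name assumed; adjust to the quant you picked).
llm = Llama(model_path="medical-phi-FineTuned.i1-Q4_K_M.gguf", n_ctx=2048)

out = llm("Question: What are common symptoms of anemia?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```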
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q2_K.gguf) | i1-Q2_K | 1.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q4_1.gguf) | i1-Q4_1 | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-phi-FineTuned-i1-GGUF/resolve/main/medical-phi-FineTuned.i1-Q6_K.gguf) | i1-Q6_K | 2.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Delta-Vector/Humanize-Rei-Slerp
|
Delta-Vector
| 2025-04-03T12:22:24Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Delta-Vector/Rei-V2-12B",
"base_model:merge:Delta-Vector/Rei-V2-12B",
"base_model:cgato/Nemo-12b-Humanize-KTO-Experimental-2",
"base_model:merge:cgato/Nemo-12b-Humanize-KTO-Experimental-2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T12:14:25Z
|
---
base_model:
- cgato/Nemo-12b-Humanize-KTO-Experimental-2
- Delta-Vector/Rei-V2-12B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
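For intuition, SLERP interpolates along the arc between two weight vectors rather than along a straight line. A minimal illustration of the idea (not mergekit's actual implementation) is shown below:
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    u0 = v0 / (v0.norm() + eps)
    u1 = v1 / (v1.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(u0, u1), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
```
With `t: 0.2` as in the configuration below, the interpolation stays closer to the Rei-V2-12B base.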
### Models Merged
The following models were included in the merge:
* [cgato/Nemo-12b-Humanize-KTO-Experimental-2](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-2)
* [Delta-Vector/Rei-V2-12B](https://huggingface.co/Delta-Vector/Rei-V2-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Delta-Vector/Rei-V2-12B
- model: cgato/Nemo-12b-Humanize-KTO-Experimental-2
merge_method: slerp
base_model: Delta-Vector/Rei-V2-12B
parameters:
t:
- value: 0.2
dtype: bfloat16
tokenizer_source: base
```
|
MeowKun/bhutanese-textile-model
|
MeowKun
| 2025-04-03T12:21:27Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-04-03T12:16:10Z
|
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: bhutanese-textile-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 1.7735 | 0.6696 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
asdasdaTes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_freckled_worm
|
asdasdaTes
| 2025-04-03T12:21:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am silent freckled worm",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T12:14:49Z
|
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_freckled_worm
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am silent freckled worm
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_freckled_worm
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="asdasdaTes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_freckled_worm", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
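At its core, GRPO scores a group of completions sampled for the same prompt and normalizes each reward against its own group instead of using a learned value baseline. A rough sketch of that group-relative normalization (illustrative only, not TRL's implementation):
```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """rewards has shape (num_prompts, group_size): one reward per sampled completion."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Four completions sampled for one prompt, each scored by the reward function.
print(group_relative_advantages(torch.tensor([[0.1, 0.7, 0.4, 0.9]])))
```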
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Cuidarte/Photo-Realism
|
Cuidarte
| 2025-04-03T12:20:54Z
| 8
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T02:35:41Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0e\0a\0r\0l\0y\0 \02\00\01\00\0s\0 \0s\0n\0a\0p\0s\0h\0o\0t\0 \0p\0h\0o\0t\0o\0 \0c\0a\0p\0t\0u\0r\0e\0d\0 \0w\0i\0t\0h\0 \0a\0 \0p\0h\0o\0n\0e\0 \0a\0n\0d\0 \0s\0a\0v\0e\0d\0 \0a\0s\0 \0I\0M\0G\0_\02\00\01\08\0.\0C\0R\02\0,\0 \0A\0 \0s\0e\0l\0f\0i\0e\0 \0o\0f\0 \0a\0 \0y\0o\0u\0n\0g\0 \0j\0a\0p\0a\0n\0e\0s\0e\0 \0w\0o\0m\0a\0n\0 \0s\0t\0a\0n\0d\0i\0n\0g\0 \0i\0n\0 \0f\0r\0o\0n\0t\0 \0o\0f\0 \0a\0 \0c\0o\0n\0c\0r\0e\0t\0e\0 \0w\0a\0l\0l\0 \0w\0i\0t\0h\0 \0g\0r\0a\0f\0f\0i\0t\0i\0 \0o\0n\0 \0i\0t\0 \0t\0h\0a\0t\0 \0r\0e\0a\0d\0s\0 \0\"\0I\0m\0p\0r\0o\0v\0e\0d\0 \0S\0k\0i\0n\0 \0+\0 \0R\0e\0a\0l\0i\0s\0m\0\"\0.\0 \0S\0h\0e\0 \0i\0s\0 \0w\0e\0a\0r\0i\0n\0g\0 \0a\0 \0s\0l\0i\0g\0h\0t\0l\0y\0 \0o\0f\0f\0-\0s\0h\0o\0u\0l\0d\0e\0r\0 \0b\0a\0g\0g\0y\0 \0w\0h\0i\0t\0e\0 \0t\0-\0s\0h\0i\0r\0t\0 \0w\0i\0t\0h\0 \0t\0h\0e\0 \0t\0e\0x\0t\0 \0\"\0I\0m\0p\0r\0o\0v\0e\0d\0 \0A\0m\0a\0t\0e\0u\0r\0 \0S\0n\0a\0p\0s\0h\0o\0t\0 \0P\0h\0o\0t\0o\0 \0R\0e\0a\0l\0i\0s\0m\0 \0v\01\02\0\"\0 \0w\0r\0i\0t\0t\0e\0n\0 \0o\0n\0 \0i\0t\0.\0 \0S\0h\0e\0 \0h\0a\0s\0 \0s\0h\0o\0r\0t\0 \0h\0a\0i\0r\0 \0s\0t\0y\0l\0e\0d\0 \0i\0n\0 \0a\0 \0b\0o\0b\0c\0u\0t\0 \0a\0n\0d\0 \0d\0y\0e\0d\0 \0i\0n\0 \0m\0u\0l\0t\0i\0p\0l\0e\0 \0r\0a\0i\0n\0b\0o\0w\0-\0l\0i\0k\0e\0 \0c\0o\0l\0o\0r\0s\0.\0 \0S\0h\0e\0 \0i\0s\0 \0h\0a\0p\0p\0y\0 \0a\0n\0d\0 \0s\0m\0i\0l\0i\0n\0g\0.\0 \0S\0h\0o\0t\0 \0d\0u\0r\0i\0n\0g\0 \0t\0h\0e\0 \0d\0a\0y\0 \0w\0i\0t\0h\0 \0n\0a\0t\0u\0r\0a\0l\0 \0l\0i\0g\0h\0t\0i\0n\0g\0 \0a\0n\0d\0 \0s\0u\0n\0s\0h\0i\0n\0e\0.\0"
output:
url: images/1000028692.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Photo-Realism
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Cuidarte/Photo-Realism/tree/main) them in the Files & versions tab.
|
ThatDustyGuy/PersonalFluxFinetune
|
ThatDustyGuy
| 2025-04-03T12:16:30Z
| 0
| 0
| null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-04-03T02:17:44Z
|
---
license: apache-2.0
---
|
corn6/DeepSeek-R1-Medical-COT
|
corn6
| 2025-04-03T12:14:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T11:46:23Z
|
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** corn6
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BSC-NLP4BIA/binary-gender-classifier
|
BSC-NLP4BIA
| 2025-04-03T12:13:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"finebert",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:58:07Z
|
---
library_name: transformers
tags: []
---
# BiGenderDetection Model Card
## Model Summary
This is a fine-tuned version of the `dccuchile/bert-base-spanish-wwm-cased` model for binary gender classification. The model was trained on a Spanish biomedical dataset to classify text into two categories: female and male.
## Model Details
- **Base Model:** `dccuchile/bert-base-spanish-wwm-cased`
- **Architecture:** FineBERT (custom classifier layers)
- **Number of Labels:** 2 (female, male)
- **Language:** Spanish
- **Problem Type:** Single-label classification
- **Maximum Sequence Length:** 512
- **Dropout:** 0.4
- **Activation Function:** ReLU
- **Output Dimension:** 1
- **BERT Frozen:** No
## Training Details
- **Dataset:** Custom dataset derived from the SPACCC corpus, preprocessed to exclude undetermined labels.
- **Training Epochs:** 25
- **Batch Size:** 8
- **Learning Rate:** 2e-5
- **Optimizer:** AdamW
- **Loss Function:** Binary Cross Entropy Loss (BCELoss)
- **Weight Decay:** 0.01
- **Warmup Steps:** 0
- **Scheduler Factor:** 0.5
- **Scheduler Patience:** 2
- **Early Stopping Patience:** 8
- **Evaluation Strategy:** Per epoch
- **Device:** CUDA
- **Framework:** 🤗 Transformers
## Model Usage
The model is designed for gender classification in Spanish biomedical texts. Given an input text, it predicts one of two classes: female or male.
## How to Use
```python
from transformers import AutoTokenizer
import torch
from model import FineBERTModel # Import your custom model class
from utils.import_config import FineBERTConfig
# Load configuration
config = FineBERTConfig.from_pretrained("path/to/saved_models/BiGenderDetection")
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
model = FineBERTModel.from_pretrained("path/to/saved_models/BiGenderDetection", config=config)
text = "Paciente femenina de 45 años con antecedentes de hipertensión."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
# Get predictions
with torch.no_grad():
logits = model.get_logits(**inputs)
prediction = torch.round(torch.sigmoid(logits)).detach().numpy()
print(prediction)
```
## Limitations
- The model is trained on Spanish biomedical text and may not generalize well to other domains.
- Gender classification based on text is inherently challenging and may be influenced by biases in the training data.
## Acknowledgments
This model is based on `dccuchile/bert-base-spanish-wwm-cased` and fine-tuned on biomedical data derived from the SPACCC corpus.
|
andsimionato/quadra-gpt2
|
andsimionato
| 2025-04-03T12:12:32Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-03T12:12:32Z
|
---
license: apache-2.0
---
|
ZhiyuanthePony/TriplaneTurbo
|
ZhiyuanthePony
| 2025-04-03T12:10:57Z
| 0
| 4
|
diffusers
|
[
"diffusers",
"text-to-3d",
"arxiv:2503.21694",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:finetune:stabilityai/stable-diffusion-2-1-base",
"license:apache-2.0",
"region:us"
] |
text-to-3d
| 2025-03-02T07:14:49Z
|
---
base_model:
- stabilityai/stable-diffusion-2-1-base
license: apache-2.0
pipeline_tag: text-to-3d
library_name: diffusers
paper:
- arxiv.org/abs/2503.21694
---
<img src="assets/Showcase_v4.drawio.png" width="100%" align="center">
<div align="center">
<h1>Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data</h1>
<div>
<a href='https://scholar.google.com/citations?user=F15mLDYAAAAJ&hl=en' target='_blank'>Zhiyuan Ma</a> 
<a href='https://scholar.google.com/citations?user=R9PlnKgAAAAJ&hl=en' target='_blank'>Xinyue Liang</a> 
<a href='https://scholar.google.com/citations?user=A-U8zE8AAAAJ&hl=en' target='_blank'>Rongyuan Wu</a> 
<a href='https://scholar.google.com/citations?user=1rbNk5oAAAAJ&hl=zh-CN' target='_blank'>Xiangyu Zhu</a> 
<a href='https://scholar.google.com/citations?user=cuJ3QG8AAAAJ&hl=en' target='_blank'>Zhen Lei</a> 
<a href='https://scholar.google.com/citations?user=tAK5l1IAAAAJ&hl=en' target='_blank'>Lei Zhang</a>
</div>
<div>
<a href="https://arxiv.org/abs/2503.21694"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
<a href='https://theericma.github.io/TriplaneTurbo/'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
<a href='https://huggingface.co/spaces/ZhiyuanthePony/TriplaneTurbo'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Live_Demo-blue'></a>
<a href='https://theericma.github.io/TriplaneTurbo/static/pdf/main.pdf'><img src='https://img.shields.io/badge/Slides-Presentation-orange?logo=microsoftpowerpoint&logoColor=white' alt='Presentation Slides'></a>
</div>
---
</div>
<!-- Updates -->
## ⏩ Updates
- **2025-04-01**: Presentation slides are now available for download.
- **2025-03-27**: The paper is now available on Arxiv.
- **2025-03-03**: Gradio and HuggingFace Demos are available.
- **2025-02-27**: TriplaneTurbo is accepted to CVPR 2025.
<!-- Features -->
## 🌟 Features
- **Fast Inference 🚀**: Our code excels in inference efficiency, producing a textured mesh in around 1 second.
- **Text Comprehension 🆙**: It demonstrates strong understanding capabilities for complex text prompts, ensuring accurate generation according to the input.
- **3D-Data-Free Training 🙅♂️**: The entire training process doesn't rely on any 3D datasets, making it more resource-friendly and adaptable.
## 🤖 Start local inference in 3 minutes
If you only wish to set up the demo locally, use the following commands for inference. Otherwise, for training and evaluation, follow the environment setup instructions in the next section.
```sh
python -m venv venv
source venv/bin/activate
bash setup.sh
python gradio_app.py
```
## 🛠️ Official Installation
Create a virtual environment:
```sh
conda create -n triplaneturbo python=3.10
conda activate triplaneturbo
conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=12.1 -c pytorch -c nvidia
```
(Optional, Recommended) Install xFormers for attention acceleration:
```sh
conda install xformers -c xformers
```
(Optional, Recommended) Install ninja to speed up the compilation of CUDA extensions
```sh
pip install ninja
```
Install major dependencies
```sh
pip install -r requirements.txt
```
Install iNGP
```sh
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```
If you encounter errors while installing iNGP, it is recommended to check your gcc version. Follow these steps to change the gcc version within your conda environment. After that, return to the project directory and reinstall iNGP and NerfAcc:
```sh
conda install -c conda-forge gxx=9.5.0
cd $CONDA_PREFIX/lib
ln -s /usr/lib/x86_64-linux-gnu/libcuda.so ./
cd <your project directory>
```
## 📊 Evaluation
If you only want to run the evaluation without training, follow these steps:
```sh
# Download the model from HuggingFace
huggingface-cli download --resume-download ZhiyuanthePony/TriplaneTurbo \
--include "triplane_turbo_sd_v1.pth" \
--local-dir ./pretrained \
--local-dir-use-symlinks False
# Download evaluation assets
python scripts/prepare/download_eval_only.py
# Run evaluation script
bash scripts/eval/dreamfusion.sh --gpu 0,1 # You can use more GPUs (e.g. 0,1,2,3,4,5,6,7). For single GPU usage, please check the script for required modifications
```
Our evaluation metrics include:
- CLIP Similarity Score
- CLIP Recall@1
For detailed evaluation results, please refer to our paper.
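As a rough illustration of the CLIP Similarity Score listed above (the repository's `evaluation/clipscore/compute.py` is the authoritative implementation, and the backbone named here is only a placeholder), a rendered view can be scored against its prompt like this:
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # placeholder backbone; the paper's choice may differ
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("rendered_view.png")  # hypothetical rendered view of a generated mesh
prompt = "a DSLR photo of a corgi wearing a top hat"

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
print((img @ txt.T).item())  # cosine similarity between the rendered view and its prompt
```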
If you want to evaluate your own model, use the following script:
```sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python launch.py \
--config <path_to_your_exp_config> \
--export \
system.exporter_type="multiprompt-mesh-exporter" \
resume=<path_to_your_ckpt> \
data.prompt_library="dreamfusion_415_prompt_library" \
system.exporter.fmt=obj
```
After running the script, you will find generated OBJ files in `outputs/<your_exp>/dreamfusion_415_prompt_library/save/<itXXXXX-export>`. Set this path as `<OBJ_DIR>`, and set `outputs/<your_exp>/dreamfusion_415_prompt_library/save/<itXXXXX-4views>` as `<VIEW_DIR>`. Then run:
```sh
SAVE_DIR=<VIEW_DIR>
python evaluation/mesh_visualize.py \
<OBJ_DIR> \
--save_dir $SAVE_DIR \
--gpu 0,1,2,3,4,5,6,7
python evaluation/clipscore/compute.py \
--result_dir $SAVE_DIR
```
The evaluation results will be displayed in your terminal once the computation is complete.
## 🚀 Training Options
### 1. Download Required Pretrained Models and Datasets
Use the provided download script to get all necessary files:
```sh
python scripts/prepare/download_full.py
```
This will download:
- Stable Diffusion 2.1 Base
- Stable Diffusion 1.5
- MVDream 4-view checkpoint
- RichDreamer checkpoint
- Text prompt datasets (3DTopia and DALLE+Midjourney)
### 2. Training Options
#### Option 1: Train with 3DTopia Text Prompts
```sh
# Single GPU
CUDA_VISIBLE_DEVICES=0 python launch.py \
--config configs/TriplaneTurbo_v0_acc-2.yaml \
--train \
data.prompt_library="3DTopia_prompt_library" \
data.condition_processor.cache_dir=".threestudio_cache/text_embeddings_3DTopia" \
data.guidance_processor.cache_dir=".threestudio_cache/text_embeddings_3DTopia"
```
For multi-GPU training:
```sh
# 8 GPUs with 48GB+ memory each
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python launch.py \
--config configs/TriplaneTurbo_v1_acc-2.yaml \
--train \
data.prompt_library="3DTopia_361k_prompt_library" \
data.condition_processor.cache_dir=".threestudio_cache/text_embeddings_3DTopia" \
data.guidance_processor.cache_dir=".threestudio_cache/text_embeddings_3DTopia"
```
#### Option 2: Train with DALLE+Midjourney Text Prompts
Choose the appropriate command based on your GPU configuration:
```sh
# Single GPU
CUDA_VISIBLE_DEVICES=0 python launch.py \
--config configs/TriplaneTurbo_v0_acc-2.yaml \
--train \
data.prompt_library="DALLE_Midjourney_prompt_library" \
data.condition_processor.cache_dir=".threestudio_cache/text_embeddings_DE+MJ" \
data.guidance_processor.cache_dir=".threestudio_cache/text_embeddings_DE+MJ"
```
For multi-GPU training (higher performance):
```sh
# 8 GPUs with 48GB+ memory each
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python launch.py \
--config configs/TriplaneTurbo_v1_acc-2.yaml \
--train \
data.prompt_library="DALLE_Midjourney_prompt_library" \
data.condition_processor.cache_dir=".threestudio_cache/text_embeddings_DE+MJ" \
data.guidance_processor.cache_dir=".threestudio_cache/text_embeddings_DE+MJ"
```
### 3. Configuration Notes
- **Memory Requirements**:
- v1 configuration: Requires GPUs with 48GB+ memory
- v0 configuration: Works with GPUs that have less memory (46GB+) but with reduced performance
- **Acceleration Options**:
- Use `_acc-2.yaml` configs for gradient accumulation to reduce memory usage
- **Advanced Options**:
- For highest quality, use `configs/TriplaneTurbo_v1.yaml` with `system.parallel_guidance=true` (requires 98GB+ memory GPUs)
- To disable certain guidance components: add `guidance.rd_weight=0 guidance.sd_weight=0` to the command
<!-- Citation -->
## 📜 Citation
If you find this work helpful, please consider citing our paper:
```
@article{ma2025progressive,
title={Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data},
author={Ma, Zhiyuan and Liang, Xinyue and Wu, Rongyuan and Zhu, Xiangyu and Lei, Zhen and Zhang, Lei},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
year={2025}
}
```
<!-- Acknowledgement -->
## 🙏 Acknowledgement
Our code is heavily based on the following works
- [ThreeStudio](https://github.com/threestudio-project/threestudio): A clean and extensible codebase for 3D generation via Score Distillation.
- [MVDream](https://github.com/bytedance/MVDream): Used as one of our multi-view teachers.
- [RichDreamer](https://github.com/bytedance/MVDream): Serves as another multi-view teacher for normal and depth supervision.
- [3DTopia](https://github.com/3DTopia/3DTopia): Its text caption dataset is applied in our training and comparison.
- [DiffMC](https://github.com/SarahWeiii/diso): Our solution uses its differentiable marching cube for mesh rasterization.
- [NeuS](https://github.com/Totoro97/NeuS): We implement its SDF-based volume rendering for dual rendering in our solution.
|
LHRuig/mikevogelsx
|
LHRuig
| 2025-04-03T12:08:44Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:08:25Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mikevogelsx
---
# mikevogelsx
<Gallery />
## Model description
mikevogelsx lora
## Trigger words
You should use `mikevogelsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/mikevogelsx/tree/main) them in the Files & versions tab.
|
HeniM/qwen2-7b-instruct-trl-sft-ChartQA
|
HeniM
| 2025-04-03T12:08:13Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T10:54:26Z
|
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HeniM/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/henimasmoudi6-nativeads-ai/qwen2-7b-instruct-trl-sft-ChartQA/runs/lkd3dbl8)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
babsii/vit-base-oxford-iiit-pets
|
babsii
| 2025-04-03T12:07:01Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-04-03T09:27:41Z
|
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1903
- Accuracy: 0.9553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3831 | 1.0 | 370 | 0.3375 | 0.9066 |
| 0.2 | 2.0 | 740 | 0.2736 | 0.9202 |
| 0.1622 | 3.0 | 1110 | 0.2580 | 0.9229 |
| 0.1309 | 4.0 | 1480 | 0.2469 | 0.9215 |
| 0.1253 | 5.0 | 1850 | 0.2435 | 0.9229 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero Shot Evaluation
- model: openai/clip-vit-large-patch14
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
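The zero-shot baseline above can be approximated with the 🤗 Transformers zero-shot image classification pipeline; a minimal sketch, assuming a few breed names from the Oxford-IIIT Pets label set and a local test image:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
labels = ["Abyssinian", "Bengal", "beagle", "pug"]  # subset of the 37 Oxford-IIIT Pets breeds
result = classifier("pet_photo.jpg", candidate_labels=labels)  # hypothetical local image
print(result[0]["label"], result[0]["score"])
```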
|
LHRuig/paulwalkersx
|
LHRuig
| 2025-04-03T12:05:59Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:05:47Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: paulwalkersx
---
# paulwalkersx
<Gallery />
## Model description
paulwalkersx lora
## Trigger words
You should use `paulwalkersx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/paulwalkersx/tree/main) them in the Files & versions tab.
|
LHRuig/rafaromerasx
|
LHRuig
| 2025-04-03T12:05:32Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:05:11Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: rafaromerasx
---
# rafaromerasx
<Gallery />
## Model description
rafaromerasx lora
## Trigger words
You should use `rafaromerasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/rafaromerasx/tree/main) them in the Files & versions tab.
|
MatricariaV/byt5-error-correction
|
MatricariaV
| 2025-04-03T12:05:28Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-03T12:04:29Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LHRuig/carloscuevasx
|
LHRuig
| 2025-04-03T12:05:09Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:04:35Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: carloscuevasx
---
# carloscuevasx
<Gallery />
## Model description
carloscuevasx lora
## Trigger words
You should use `carloscuevasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/carloscuevasx/tree/main) them in the Files & versions tab.
|
jnjj/Bitnet-llama
|
jnjj
| 2025-04-03T12:04:58Z
| 18
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-29T02:52:38Z
|
---
library_name: transformers
---
|
LHRuig/andrelamogliasx
|
LHRuig
| 2025-04-03T12:04:25Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:03:57Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: andrelamogliasx
---
# andrelamogliasx
<Gallery />
## Model description
andrelamogliasx lora
## Trigger words
You should use `andrelamogliasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/andrelamogliasx/tree/main) them in the Files & versions tab.
|
LHRuig/pabloalboransx
|
LHRuig
| 2025-04-03T12:03:56Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:03:07Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: pabloalboransx
---
# pabloalboransx
<Gallery />
## Model description
pabloalboransx lora
## Trigger words
You should use `pabloalboransx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/pabloalboransx/tree/main) them in the Files & versions tab.
|
LHRuig/loganpaulssx
|
LHRuig
| 2025-04-03T12:02:48Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:02:28Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: loganpaulsx
---
# loganpaulsx
<Gallery />
## Model description
loganpaulsx lora
## Trigger words
You should use `loganpaulsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/loganpaulssx/tree/main) them in the Files & versions tab.
|
yallzerno/whiteout_style_v2_LoRA
|
yallzerno
| 2025-04-03T12:01:54Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-04-03T12:01:48Z
|
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: digital concept art in the style of WHITEOUT
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - yallzerno/whiteout_style_v2_LoRA
<Gallery />
## Model description
These are yallzerno/whiteout_style_v2_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use digital concept art in the style of WHITEOUT to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](yallzerno/whiteout_style_v2_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
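Until the snippet above is filled in, here is a minimal sketch of one way to load these adapter weights with 🧨 diffusers (the weight file name is auto-detected from the repository; the prompt content after the trigger phrase is only an example):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("yallzerno/whiteout_style_v2_LoRA")
image = pipeline("digital concept art in the style of WHITEOUT, a frozen mountain pass").images[0]
image.save("whiteout_sample.png")
```
For fp16 inference you may also want to swap in the `madebyollin/sdxl-vae-fp16-fix` VAE that was used during training.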
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
LHRuig/rodrigoguiarodsx
|
LHRuig
| 2025-04-03T12:01:33Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:01:15Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: rodrigoguiarodsx
---
# rodrigoguiarodsx
<Gallery />
## Model description
rodrigoguiarodsx lora
## Trigger words
You should use `rodrigoguiarodsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/rodrigoguiarodsx/tree/main) them in the Files & versions tab.
|
LHRuig/johnbubniaksx
|
LHRuig
| 2025-04-03T12:00:25Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T12:00:05Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: johnbubniaksx
---
# johnbubniaksx
<Gallery />
## Model description
johnbubniaksx lora
## Trigger words
You should use `johnbubniaksx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/johnbubniaksx/tree/main) them in the Files & versions tab.
|
LHRuig/bosinnsx
|
LHRuig
| 2025-04-03T11:59:44Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:59:26Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: bosinnsx
---
# bosinnsx
<Gallery />
## Model description
bosinnsx lora
## Trigger words
You should use `bosinnsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/bosinnsx/tree/main) them in the Files & versions tab.
|
LHRuig/ryanreynoldssx
|
LHRuig
| 2025-04-03T11:59:01Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:58:41Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ryanreynoldssx
---
# ryanreynoldssx
<Gallery />
## Model description
ryanreynoldssx lora
## Trigger words
You should use `ryanreynoldssx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ryanreynoldssx/tree/main) them in the Files & versions tab.
|
HiteshKamwal/KYCOCR
|
HiteshKamwal
| 2025-04-03T11:58:42Z
| 5
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2_vl",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:prithivMLmods/Qwen2-VL-OCR-2B-Instruct",
"base_model:adapter:prithivMLmods/Qwen2-VL-OCR-2B-Instruct",
"license:other",
"region:us"
] | null | 2025-04-02T07:27:33Z
|
---
library_name: peft
license: other
base_model: prithivMLmods/Qwen2-VL-OCR-2B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_2025-04-01-09-06-36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2025-04-01-09-06-36
This model is a fine-tuned version of [prithivMLmods/Qwen2-VL-OCR-2B-Instruct](https://huggingface.co/prithivMLmods/Qwen2-VL-OCR-2B-Instruct) on the OCR_Finetuning_Dataset dataset.
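Since this repository holds PEFT LoRA adapter weights rather than a full checkpoint, one plausible way to load them is sketched below (image and prompt handling follow the base model's card and are omitted here):
```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from peft import PeftModel

base_id = "prithivMLmods/Qwen2-VL-OCR-2B-Instruct"
base = Qwen2VLForConditionalGeneration.from_pretrained(base_id, device_map="auto")
processor = AutoProcessor.from_pretrained(base_id)

# Attach the fine-tuned LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "HiteshKamwal/KYCOCR")
```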
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
danruban/gemma3-1b-finetune
|
danruban
| 2025-04-03T11:57:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T11:54:59Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LHRuig/dylaminnettesx
|
LHRuig
| 2025-04-03T11:57:10Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:56:48Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: dylaminnettesx
---
# dylaminnettesx
<Gallery />
## Model description
dylaminnettesx lora
## Trigger words
You should use `dylaminnettesx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/dylaminnettesx/tree/main) them in the Files & versions tab.
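## Use it with 🧨 diffusers
A minimal loading sketch on top of the FLUX.1-dev base model listed above; the weight file name `lora.safetensors` and the generation settings are assumptions, not documented by this card.
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model this LoRA was trained against
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")

# Load the adapter; the weight file name is an assumption
pipeline.load_lora_weights("LHRuig/dylaminnettesx", weight_name="lora.safetensors")

# Include the trigger word `dylaminnettesx` in the prompt
image = pipeline("dylaminnettesx wearing a suit").images[0]
image.save("dylaminnettesx.png")
```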
|
rbelanec/lora_04012025151205_mmlu_adv_meta-llama-3.1-8b-instruct
|
rbelanec
| 2025-04-03T11:56:34Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T08:16:08Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LHRuig/stevenyeunsx
|
LHRuig
| 2025-04-03T11:54:27Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:54:09Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: stevenyeunsx
---
# stevenyeunsx
<Gallery />
## Model description
stevenyeunsx lora
## Trigger words
You should use `stevenyeunsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/stevenyeunsx/tree/main) them in the Files & versions tab.
|
LHRuig/joshoconnorsx
|
LHRuig
| 2025-04-03T11:53:50Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:53:30Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: joshoconnorsx
---
# joshoconnorsx
<Gallery />
## Model description
joshoconnorsx lora
## Trigger words
You should use `joshoconnorsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/joshoconnorsx/tree/main) them in the Files & versions tab.
|
LHRuig/yjakegyllenhaalsx
|
LHRuig
| 2025-04-03T11:53:14Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:52:54Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: yjakegyllenhaalsx
---
# yjakegyllenhaalsx
<Gallery />
## Model description
yjakegyllenhaalsx lora
## Trigger words
You should use `yjakegyllenhaalsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/yjakegyllenhaalsx/tree/main) them in the Files & versions tab.
|
LHRuig/ytimothychalamsx
|
LHRuig
| 2025-04-03T11:52:35Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:52:15Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ytimothychalamsx
---
# ytimothychalamsx
<Gallery />
## Model description
ytimothychalamsx lora
## Trigger words
You should use `ytimothychalamsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ytimothychalamsx/tree/main) them in the Files & versions tab.
|
RichardErkhov/huiwonLee_-_MRC_lora-gguf
|
RichardErkhov
| 2025-04-03T11:51:49Z
| 0
| 0
| null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T11:14:52Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MRC_lora - GGUF
- Model creator: https://huggingface.co/huiwonLee/
- Original model: https://huggingface.co/huiwonLee/MRC_lora/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MRC_lora.Q2_K.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q2_K.gguf) | Q2_K | 1.32GB |
| [MRC_lora.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [MRC_lora.IQ3_S.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [MRC_lora.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [MRC_lora.IQ3_M.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [MRC_lora.Q3_K.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q3_K.gguf) | Q3_K | 1.82GB |
| [MRC_lora.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [MRC_lora.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [MRC_lora.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [MRC_lora.Q4_0.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q4_0.gguf) | Q4_0 | 2.03GB |
| [MRC_lora.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [MRC_lora.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [MRC_lora.Q4_K.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q4_K.gguf) | Q4_K | 2.23GB |
| [MRC_lora.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [MRC_lora.Q4_1.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q4_1.gguf) | Q4_1 | 2.24GB |
| [MRC_lora.Q5_0.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q5_0.gguf) | Q5_0 | 2.46GB |
| [MRC_lora.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [MRC_lora.Q5_K.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q5_K.gguf) | Q5_K | 2.62GB |
| [MRC_lora.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [MRC_lora.Q5_1.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q5_1.gguf) | Q5_1 | 2.68GB |
| [MRC_lora.Q6_K.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q6_K.gguf) | Q6_K | 2.92GB |
| [MRC_lora.Q8_0.gguf](https://huggingface.co/RichardErkhov/huiwonLee_-_MRC_lora-gguf/blob/main/MRC_lora.Q8_0.gguf) | Q8_0 | 3.78GB |
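Any GGUF-compatible runtime can load the files above. A minimal sketch with llama-cpp-python is shown below; this usage is an assumption rather than part of the original release, so pick a quant from the table and adjust the context size and prompt to your needs.
```python
from llama_cpp import Llama

# Point at one of the downloaded quant files from the table above
llm = Llama(model_path="MRC_lora.Q4_K_M.gguf", n_ctx=4096)

out = llm("Question: What is machine reading comprehension?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```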
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prodypanda/pulire-tdm-lora-v1
|
prodypanda
| 2025-04-03T11:51:03Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers",
"lora",
"dreambooth-concept",
"base_model:prodypanda/pulire-towel-dispenser-concept-v1",
"base_model:adapter:prodypanda/pulire-towel-dispenser-concept-v1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-04-03T09:19:44Z
|
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- peft
- dreambooth-concept
base_model: prodypanda/pulire-towel-dispenser-concept-v1
instance_prompt: "a photo of <pulire-tdm> towel dispenser machine"
library_name: peft
---
### Pulire Tdm Lora V1 - LoRA Concept Adapter
This is a LoRA (Low-Rank Adaptation) adapter trained on the `pulire-tdm-lora-v1` concept using the `a photo of <pulire-tdm> towel dispenser machine` trigger.
It was trained on the base model `prodypanda/pulire-towel-dispenser-concept-v1`.
**Trigger Prompt:** `a photo of <pulire-tdm> towel dispenser machine`
#### Usage (with 🧨 Diffusers)

```python
from diffusers import StableDiffusionPipeline, AutoencoderKL
import torch
# 1. Load the base model pipeline
base_model_id = "prodypanda/pulire-towel-dispenser-concept-v1"
# Optional: Load a specific VAE if needed
# vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
# pipe = StableDiffusionPipeline.from_pretrained(base_model_id, vae=vae, torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe.to("cuda")
# 2. Load the LoRA adapter weights
lora_adapter_id = "prodypanda/pulire-tdm-lora-v1"
pipe.load_lora_weights(lora_adapter_id)
# Optional: Specify subfolders if weights are organized that way in the repo
# pipe.load_lora_weights(lora_adapter_id, subfolder="unet", weight_name="pytorch_lora_weights.safetensors")
# if text_encoder LoRA exists:
# pipe.load_lora_weights(lora_adapter_id, subfolder="text_encoder", weight_name="pytorch_lora_weights.safetensors")
# 3. Generate images!
prompt = "a photo of <pulire-tdm> towel dispenser machine in a vibrant jungle"
negative_prompt = "low quality, blurry, unrealistic"
# Adjust LoRA weight (optional, 0.0-1.0) - requires Diffusers >= 0.17.0
# image = pipe(prompt, negative_prompt=negative_prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("output_lora.png")
# To unload LoRA and use the base model again:
# pipe.unload_lora_weights()
```
#### Training Images
The following images were used for training this concept:
<div style="display: flex; flex-wrap: wrap; gap: 10px;">
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/869f7a26ff2fddde.jpg" alt="concept image 1" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/92ab9ca72abe41ff.jpg" alt="concept image 2" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/8017c57204117cfa.jpg" alt="concept image 3" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/7fea4afd91e3ea7c.jpg" alt="concept image 4" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/d1a6027d71dcf22c.jpg" alt="concept image 5" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/a3e9e828f07de551.jpg" alt="concept image 6" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/e1206342b2969ff4.jpg" alt="concept image 7" width="150"/>
<img src="https://huggingface.co/prodypanda/pulire-tdm-lora-v1/resolve/main/concept_images/43afa8f0a0485106.jpg" alt="concept image 8" width="150"/>
</div>
---
*LoRA training run using the [🧨 Diffusers](https://github.com/huggingface/diffusers) and [🤗 PEFT](https://github.com/huggingface/peft) libraries.*
|
LHRuig/ymalumasx
|
LHRuig
| 2025-04-03T11:50:56Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:50:46Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ymalumasx
---
# ymalumasx
<Gallery />
## Model description
ymalumasx lora
## Trigger words
You should use `ymalumasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ymalumasx/tree/main) them in the Files & versions tab.
|
LHRuig/yorlandobloomsx
|
LHRuig
| 2025-04-03T11:50:22Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:50:02Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: yorlandobloomsx
---
# yorlandobloomsx
<Gallery />
## Model description
yorlandobloomsx lora
## Trigger words
You should use `yorlandobloomsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/yorlandobloomsx/tree/main) them in the Files & versions tab.
|
yuvale123/Model04e
|
yuvale123
| 2025-04-03T11:49:41Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-03T11:22:52Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: e64a30d8
---
# Model04E
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `e64a30d8` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "e64a30d8",
"lora_weights": "https://huggingface.co/yuvale123/Model04e/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('yuvale123/Model04e', weight_name='lora.safetensors')
image = pipeline('e64a30d8').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/yuvale123/Model04e/discussions) to add images that show off what you’ve made with this LoRA.
|
pokemonying/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_silky_buffalo
|
pokemonying
| 2025-04-03T11:49:15Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am twitchy silky buffalo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T11:42:50Z
|
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_silky_buffalo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am twitchy silky buffalo
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_silky_buffalo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pokemonying/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_silky_buffalo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jahyungu/Qwen2.5-7B-Instruct_Sky-T1-7B-step2-distill-5k
|
jahyungu
| 2025-04-03T11:48:25Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T08:45:47Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct_Sky-T1-7B-step2-distill-5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_Sky-T1-7B-step2-distill-5k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
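As an illustrative sketch, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as follows; field names follow the transformers API, and the output directory and any omitted settings are assumptions.
```python
from transformers import TrainingArguments

# Values taken from the hyperparameter list above; everything else is left at defaults
args = TrainingArguments(
    output_dir="Qwen2.5-7B-Instruct_Sky-T1-7B-step2-distill-5k",  # assumed name
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size 4
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
)
```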
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
nesrich/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gregarious_rabid_chicken
|
nesrich
| 2025-04-03T11:48:22Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am gregarious rabid chicken",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T03:40:09Z
|
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gregarious_rabid_chicken
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am gregarious rabid chicken
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gregarious_rabid_chicken
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nesrich/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gregarious_rabid_chicken", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bambisheng/UltraIF-8B-UltraComposer
|
bambisheng
| 2025-04-03T11:48:16Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:2502.04153",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T09:20:04Z
|
---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
pipeline_tag: text-generation
---
# UltraIF-8B-UltraComposer
## Links 🚀
UltraIF model series and data are available at 🤗 HuggingFace.
- 🤖 [UltraComposer](https://huggingface.co/bambisheng/UltraIF-8B-UltraComposer)
- 📖 [SFT Data](https://huggingface.co/datasets/kkk-an/UltraIF-sft-175k) and [SFT Model](https://huggingface.co/bambisheng/UltraIF-8B-SFT)
- ⚖️ [DPO Data](https://huggingface.co/datasets/kkk-an/UltraIF-dpo-20k) and [DPO Model](https://huggingface.co/bambisheng/UltraIF-8B-DPO)
Also check out our 📚 [Paper](https://arxiv.org/abs/2502.04153) and 💻[code](https://github.com/kkk-an/UltraIF)
## Model Description
UltraIF-8B-UltraComposer is a specialized composer that can facilitate the synthesis of wild instructions with more complex and diverse constraints, fine-tuned from [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
## Introduction of UltraIF
UltraIF first constructs the **UltraComposer** by decomposing user instructions into simplified ones and constraints, along with corresponding evaluation questions. This specialized composer facilitates the synthesis of instructions with more complex and diverse constraints, while the evaluation questions ensure the correctness and reliability of the generated responses.
Then, we introduce the **Generate-then-Evaluate** process. This framework first uses UltraComposer to incorporate constraints into instructions and then evaluates the generated responses using corresponding evaluation questions covering various quality levels.

## Usage
Format your input as follows:
```
[history]: {your_chat_history}
[initial query]: {your_query}
```
The output will be returned in JSON format:
```json
{"augmented query":.., "question":..}
```
For more details, check out our [official implementation](https://github.com/kkk-an/UltraIF/blob/main/Preprocessing/augment_query.py) for UltraComposer.
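A minimal generation sketch with 🤗 transformers is shown below; the chat-template call and decoding settings are assumptions and may differ from the official implementation linked above.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "bambisheng/UltraIF-8B-UltraComposer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Input follows the documented format: chat history (may be empty) plus the initial query
prompt = "[history]: \n[initial query]: Write a short poem about the ocean."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# The response should be a JSON object: {"augmented query": ..., "question": ...}
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```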
## Reference
<br> **📑 If you find our projects helpful to your research, please consider citing:** <br>
```
@article{an2025ultraif,
title={UltraIF: Advancing Instruction Following from the Wild},
author={An, Kaikai and Sheng, Li and Cui, Ganqu and Si, Shuzheng and Ding, Ning and Cheng, Yu and Chang, Baobao},
journal={arXiv preprint arXiv:2502.04153},
year={2025}
}
```
|
outlookAi/Y3FakGV0Et
|
outlookAi
| 2025-04-03T11:47:17Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-03T11:26:10Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MUKKY2
---
# Y3Fakgv0Et
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MUKKY2` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MUKKY2",
"lora_weights": "https://huggingface.co/outlookAi/Y3FakGV0Et/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/Y3FakGV0Et', weight_name='lora.safetensors')
image = pipeline('MUKKY2').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/outlookAi/Y3FakGV0Et/discussions) to add images that show off what you’ve made with this LoRA.
|
LHRuig/yaustinmahonesx
|
LHRuig
| 2025-04-03T11:47:11Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:46:51Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: yaustinmahonesx
---
# yaustinmahonesx
<Gallery />
## Model description
yaustinmahonesx lora
## Trigger words
You should use `yaustinmahonesx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/yaustinmahonesx/tree/main) them in the Files & versions tab.
|
TareksLab/Wordsmith-V5.0-LLaMa-70B
|
TareksLab
| 2025-04-03T11:46:25Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated",
"base_model:merge:huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T11:35:37Z
|
---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B) as a base.
### Models Merged
The following models were included in the merge:
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: Sao10K/70B-L3.3-Cirrus-x1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
parameters:
weight: 0.20
density: 0.7
epsilon: 0.1
lambda: 1
- model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
base_model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
merge_method: della_linear
parameters:
normalize: false
tokenizer:
source: union
dtype: bfloat16
chat_template: llama3
```
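To rebuild a merge from a configuration like this one, mergekit provides the `mergekit-yaml` entry point; a minimal sketch, with the output path and `--cuda` flag purely illustrative:
```
mergekit-yaml config.yaml ./merged-model --cuda
```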
|
leedongmin125/lee
|
leedongmin125
| 2025-04-03T11:46:23Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-02T09:42:57Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lee
---
# Lee
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lee` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "lee",
"lora_weights": "https://huggingface.co/leedongmin125/lee/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('leedongmin125/lee', weight_name='lora.safetensors')
image = pipeline('lee').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/leedongmin125/lee/discussions) to add images that show off what you’ve made with this LoRA.
|
LHRuig/liampaynesx
|
LHRuig
| 2025-04-03T11:45:12Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:44:40Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: liampaynesx
---
# liampaynesx
<Gallery />
## Model description
liampaynesx lora
## Trigger words
You should use `liampaynesx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/liampaynesx/tree/main) them in the Files & versions tab.
|
TareksLab/Cortex-V4-LLaMA-70B
|
TareksLab
| 2025-04-03T11:45:07Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:Doctor-Shotgun/L3.3-70B-Magnum-v4-SE",
"base_model:merge:Doctor-Shotgun/L3.3-70B-Magnum-v4-SE",
"base_model:Sao10K/70B-L3.3-mhnnn-x1",
"base_model:merge:Sao10K/70B-L3.3-mhnnn-x1",
"base_model:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated",
"base_model:merge:huihui-ai/Llama-3.3-70B-Instruct-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T11:33:43Z
|
---
base_model:
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
- Sao10K/70B-L3.3-mhnnn-x1
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- Sao10K/L3.3-70B-Euryale-v2.3
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1)
* [Doctor-Shotgun/L3.3-70B-Magnum-v4-SE](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v4-SE)
* [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: Sao10K/70B-L3.3-mhnnn-x1
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: Sao10K/L3.3-70B-Euryale-v2.3
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.20
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
parameters:
weight: 0.20
density: 0.7
epsilon: 0.1
lambda: 1.0
merge_method: della_linear
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
```
|
Haricot24601/rl_course_vizdoom_health_gathering_supreme_2
|
Haricot24601
| 2025-04-03T11:45:02Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-03T06:46:12Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.93 +/- 0.38
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Haricot24601/rl_course_vizdoom_health_gathering_supreme_2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it had already completed.
|
xinyifang/ArxivLlama_HOP
|
xinyifang
| 2025-04-03T11:44:59Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-03T10:43:11Z
|
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xinyifang
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ajayy1722/LlamaDPO_adapters
|
ajayy1722
| 2025-04-03T11:44:57Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-04-03T11:44:23Z
|
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
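A minimal loading sketch, assuming the adapter targets the `unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit` base model named in this card's metadata; the prompt and generation settings are illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit"  # from the card metadata
adapter_id = "ajayy1722/LlamaDPO_adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```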
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
ajayy1722/LlamaDPO_model
|
ajayy1722
| 2025-04-03T11:44:23Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-04-03T11:43:55Z
|
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
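A minimal, unverified sketch, assuming this repository holds a PEFT adapter for the base model listed in the metadata above (`unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base model declared in the card metadata (assumption: it is the intended base).
base_id = "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "ajayy1722/LlamaDPO_model")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```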
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
iTroned/mix_ensemble_super_long_v1
|
iTroned
| 2025-04-03T11:41:51Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T09:20:18Z
|
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: mix_ensemble_super_long_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/fu4rv39a)
# mix_ensemble_super_long_v1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2879
- Accuracy Offensive: 0.9184
- F1 Offensive: 0.9157
- Accuracy Targeted: 0.9226
- F1 Targeted: 0.9006
- Accuracy Stance: 0.8693
- F1 Stance: 0.8271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Offensive | Accuracy Targeted | F1 Targeted | Accuracy Stance | F1 Stance |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------:|:-----------------:|:-----------:|:---------------:|:---------:|
| 0.7637 | 1.0 | 1324 | 0.7555 | 0.6545 | 0.5178 | 0.6545 | 0.5178 | 0.7009 | 0.5777 |
| 0.7299 | 2.0 | 2648 | 0.7125 | 0.6639 | 0.5465 | 0.6556 | 0.5204 | 0.7009 | 0.5777 |
| 0.7024 | 3.0 | 3972 | 0.6648 | 0.6998 | 0.6314 | 0.7100 | 0.6593 | 0.7017 | 0.5813 |
| 0.6657 | 4.0 | 5296 | 0.6314 | 0.7107 | 0.6451 | 0.7368 | 0.6994 | 0.7273 | 0.6532 |
| 0.6458 | 5.0 | 6620 | 0.5971 | 0.7508 | 0.7129 | 0.7704 | 0.7474 | 0.7515 | 0.7056 |
| 0.6336 | 6.0 | 7944 | 0.5819 | 0.7300 | 0.6731 | 0.7749 | 0.7434 | 0.7606 | 0.7057 |
| 0.6016 | 7.0 | 9268 | 0.5498 | 0.7674 | 0.7337 | 0.7980 | 0.7763 | 0.7738 | 0.7302 |
| 0.5853 | 8.0 | 10592 | 0.5281 | 0.7742 | 0.7421 | 0.8172 | 0.7955 | 0.7829 | 0.7404 |
| 0.5675 | 9.0 | 11916 | 0.5150 | 0.7545 | 0.7084 | 0.8229 | 0.7978 | 0.7968 | 0.7485 |
| 0.5497 | 10.0 | 13240 | 0.4831 | 0.8104 | 0.7894 | 0.8501 | 0.8304 | 0.8063 | 0.7673 |
| 0.5315 | 11.0 | 14564 | 0.4642 | 0.7987 | 0.7730 | 0.8550 | 0.8330 | 0.8127 | 0.7684 |
| 0.5342 | 12.0 | 15888 | 0.4416 | 0.8089 | 0.7864 | 0.8693 | 0.8480 | 0.8270 | 0.7840 |
| 0.5177 | 13.0 | 17212 | 0.4280 | 0.8350 | 0.8200 | 0.8784 | 0.8576 | 0.8319 | 0.7903 |
| 0.5035 | 14.0 | 18536 | 0.4040 | 0.8433 | 0.8301 | 0.8920 | 0.8709 | 0.8353 | 0.7950 |
| 0.4983 | 15.0 | 19860 | 0.3904 | 0.8433 | 0.8296 | 0.8999 | 0.8785 | 0.8489 | 0.8059 |
| 0.4837 | 16.0 | 21184 | 0.3985 | 0.8063 | 0.7815 | 0.8890 | 0.8666 | 0.8391 | 0.7926 |
| 0.4844 | 17.0 | 22508 | 0.3625 | 0.8667 | 0.8574 | 0.9082 | 0.8866 | 0.8554 | 0.8127 |
| 0.4691 | 18.0 | 23832 | 0.3616 | 0.8633 | 0.8533 | 0.9060 | 0.8841 | 0.8520 | 0.8082 |
| 0.4541 | 19.0 | 25156 | 0.3479 | 0.8882 | 0.8824 | 0.9116 | 0.8900 | 0.8573 | 0.8156 |
| 0.45 | 20.0 | 26480 | 0.3413 | 0.8682 | 0.8590 | 0.9139 | 0.8919 | 0.8633 | 0.8195 |
| 0.4427 | 21.0 | 27804 | 0.3356 | 0.8939 | 0.8889 | 0.9162 | 0.8945 | 0.8569 | 0.8159 |
| 0.4281 | 22.0 | 29128 | 0.3259 | 0.8705 | 0.8615 | 0.9184 | 0.8963 | 0.8603 | 0.8156 |
| 0.4408 | 23.0 | 30452 | 0.3162 | 0.8901 | 0.8842 | 0.9222 | 0.9001 | 0.8663 | 0.8223 |
| 0.4469 | 24.0 | 31776 | 0.3143 | 0.9128 | 0.9095 | 0.9215 | 0.8997 | 0.8633 | 0.8221 |
| 0.4115 | 25.0 | 33100 | 0.3104 | 0.9196 | 0.9170 | 0.9177 | 0.8960 | 0.8614 | 0.8208 |
| 0.4231 | 26.0 | 34424 | 0.3026 | 0.9154 | 0.9125 | 0.9237 | 0.9017 | 0.8614 | 0.8199 |
| 0.4224 | 27.0 | 35748 | 0.2949 | 0.9094 | 0.9057 | 0.9290 | 0.9068 | 0.8682 | 0.8245 |
| 0.4169 | 28.0 | 37072 | 0.2830 | 0.9248 | 0.9227 | 0.9286 | 0.9065 | 0.8708 | 0.8292 |
| 0.4128 | 29.0 | 38396 | 0.2935 | 0.9222 | 0.9198 | 0.9230 | 0.9010 | 0.8667 | 0.8243 |
| 0.4103 | 30.0 | 39720 | 0.2870 | 0.9267 | 0.9248 | 0.9226 | 0.9007 | 0.8629 | 0.8220 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
mujerry/segformer-b2-finetuned-ade-512-512_necrosis
|
mujerry
| 2025-04-03T11:41:37Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b2-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b2-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2025-04-02T12:25:49Z
|
---
library_name: transformers
license: other
base_model: nvidia/segformer-b2-finetuned-ade-512-512
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b2-finetuned-ade-512-512_necrosis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-finetuned-ade-512-512_necrosis
This model is a fine-tuned version of [nvidia/segformer-b2-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b2-finetuned-ade-512-512) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0547
- Mean Iou: 0.8851
- Mean Accuracy: 0.9274
- Overall Accuracy: 0.9826
- Accuracy Background: 0.9941
- Accuracy Necrosis: 0.8203
- Accuracy Root: 0.9678
- Iou Background: 0.9889
- Iou Necrosis: 0.7417
- Iou Root: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
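A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` SegFormer classes (the label names are not documented in this card):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "mujerry/segformer-b2-finetuned-ade-512-512_necrosis"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("root_scan.png")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)

# Upsample to the input resolution and take the per-pixel argmax to get the segmentation map.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```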
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch is shown after the list):
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
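For reference, the settings above correspond roughly to the following `TrainingArguments`; this is a sketch only, as the actual training script is not part of this card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b2-finetuned-ade-512-512_necrosis",
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=40,
)
```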
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Necrosis | Accuracy Root | Iou Background | Iou Necrosis | Iou Root |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:-----------------:|:-------------:|:--------------:|:------------:|:--------:|
| 1.0136 | 0.3125 | 20 | 0.9745 | 0.2835 | 0.5534 | 0.5117 | 0.5703 | 0.8384 | 0.2516 | 0.5531 | 0.0588 | 0.2387 |
| 0.782 | 0.625 | 40 | 0.6546 | 0.6443 | 0.7573 | 0.9244 | 0.9470 | 0.3958 | 0.9292 | 0.9426 | 0.1808 | 0.8096 |
| 0.5646 | 0.9375 | 60 | 0.5035 | 0.6000 | 0.6673 | 0.9352 | 0.9622 | 0.0591 | 0.9807 | 0.9595 | 0.0417 | 0.7987 |
| 0.4075 | 1.25 | 80 | 0.3676 | 0.6185 | 0.6781 | 0.9491 | 0.9802 | 0.0744 | 0.9797 | 0.9743 | 0.0697 | 0.8114 |
| 0.3336 | 1.5625 | 100 | 0.2976 | 0.6525 | 0.7111 | 0.9526 | 0.9793 | 0.1703 | 0.9838 | 0.9751 | 0.1626 | 0.8198 |
| 0.3046 | 1.875 | 120 | 0.2017 | 0.8358 | 0.9058 | 0.9716 | 0.9905 | 0.7937 | 0.9334 | 0.9798 | 0.6453 | 0.8823 |
| 0.1448 | 2.1875 | 140 | 0.1557 | 0.8383 | 0.9006 | 0.9725 | 0.9850 | 0.7537 | 0.9631 | 0.9798 | 0.6465 | 0.8885 |
| 0.1214 | 2.5 | 160 | 0.1194 | 0.8600 | 0.9089 | 0.9773 | 0.9944 | 0.7847 | 0.9475 | 0.9840 | 0.6915 | 0.9044 |
| 0.1044 | 2.8125 | 180 | 0.1037 | 0.8590 | 0.9012 | 0.9779 | 0.9938 | 0.7523 | 0.9575 | 0.9848 | 0.6852 | 0.9069 |
| 0.0875 | 3.125 | 200 | 0.1002 | 0.8520 | 0.8956 | 0.9769 | 0.9906 | 0.7280 | 0.9681 | 0.9844 | 0.6686 | 0.9031 |
| 0.0873 | 3.4375 | 220 | 0.0873 | 0.8574 | 0.8968 | 0.9781 | 0.9919 | 0.7293 | 0.9693 | 0.9853 | 0.6787 | 0.9083 |
| 0.0823 | 3.75 | 240 | 0.0876 | 0.8712 | 0.9292 | 0.9789 | 0.9944 | 0.8486 | 0.9447 | 0.9857 | 0.7185 | 0.9094 |
| 0.0828 | 4.0625 | 260 | 0.0866 | 0.8657 | 0.9290 | 0.9765 | 0.9934 | 0.8578 | 0.9357 | 0.9827 | 0.7143 | 0.9002 |
| 0.0601 | 4.375 | 280 | 0.0774 | 0.8619 | 0.9002 | 0.9787 | 0.9937 | 0.7430 | 0.9638 | 0.9857 | 0.6901 | 0.9100 |
| 0.0734 | 4.6875 | 300 | 0.0746 | 0.8588 | 0.8964 | 0.9787 | 0.9924 | 0.7261 | 0.9708 | 0.9860 | 0.6798 | 0.9106 |
| 0.1485 | 5.0 | 320 | 0.0693 | 0.8774 | 0.9267 | 0.9804 | 0.9938 | 0.8291 | 0.9571 | 0.9866 | 0.7293 | 0.9164 |
| 0.0592 | 5.3125 | 340 | 0.0681 | 0.8739 | 0.9184 | 0.9800 | 0.9927 | 0.7982 | 0.9644 | 0.9862 | 0.7202 | 0.9153 |
| 0.0599 | 5.625 | 360 | 0.0665 | 0.8753 | 0.9207 | 0.9804 | 0.9925 | 0.8039 | 0.9657 | 0.9866 | 0.7224 | 0.9169 |
| 0.0653 | 5.9375 | 380 | 0.0651 | 0.8774 | 0.9304 | 0.9802 | 0.9946 | 0.8461 | 0.9506 | 0.9863 | 0.7301 | 0.9159 |
| 0.0729 | 6.25 | 400 | 0.0635 | 0.8795 | 0.9241 | 0.9812 | 0.9929 | 0.8125 | 0.9670 | 0.9876 | 0.7311 | 0.9197 |
| 0.0713 | 6.5625 | 420 | 0.0653 | 0.8785 | 0.9273 | 0.9802 | 0.9954 | 0.8376 | 0.9490 | 0.9862 | 0.7346 | 0.9147 |
| 0.0584 | 6.875 | 440 | 0.0619 | 0.8772 | 0.9173 | 0.9807 | 0.9943 | 0.7956 | 0.9619 | 0.9866 | 0.7273 | 0.9177 |
| 0.0515 | 7.1875 | 460 | 0.0629 | 0.8644 | 0.9005 | 0.9799 | 0.9933 | 0.7369 | 0.9714 | 0.9871 | 0.6912 | 0.9148 |
| 0.0423 | 7.5 | 480 | 0.0594 | 0.8809 | 0.9237 | 0.9815 | 0.9938 | 0.8119 | 0.9653 | 0.9877 | 0.7337 | 0.9212 |
| 0.0568 | 7.8125 | 500 | 0.0588 | 0.8822 | 0.9369 | 0.9813 | 0.9925 | 0.8564 | 0.9617 | 0.9877 | 0.7387 | 0.9201 |
| 0.0786 | 8.125 | 520 | 0.0587 | 0.8781 | 0.9178 | 0.9814 | 0.9946 | 0.7945 | 0.9644 | 0.9877 | 0.7260 | 0.9205 |
| 0.0475 | 8.4375 | 540 | 0.0643 | 0.8693 | 0.9098 | 0.9796 | 0.9923 | 0.7688 | 0.9683 | 0.9860 | 0.7081 | 0.9137 |
| 0.0556 | 8.75 | 560 | 0.0571 | 0.8738 | 0.9099 | 0.9812 | 0.9948 | 0.7673 | 0.9677 | 0.9880 | 0.7134 | 0.9199 |
| 0.0511 | 9.0625 | 580 | 0.0574 | 0.8786 | 0.9199 | 0.9814 | 0.9923 | 0.7945 | 0.9729 | 0.9878 | 0.7273 | 0.9207 |
| 0.0392 | 9.375 | 600 | 0.0571 | 0.8713 | 0.9074 | 0.9807 | 0.9936 | 0.7576 | 0.9711 | 0.9876 | 0.7088 | 0.9176 |
| 0.0438 | 9.6875 | 620 | 0.0565 | 0.8823 | 0.9326 | 0.9817 | 0.9949 | 0.8461 | 0.9568 | 0.9882 | 0.7374 | 0.9213 |
| 0.157 | 10.0 | 640 | 0.0564 | 0.8829 | 0.9292 | 0.9815 | 0.9944 | 0.8337 | 0.9594 | 0.9877 | 0.7411 | 0.9200 |
| 0.0404 | 10.3125 | 660 | 0.0571 | 0.8814 | 0.9276 | 0.9811 | 0.9957 | 0.8346 | 0.9526 | 0.9870 | 0.7384 | 0.9188 |
| 0.0447 | 10.625 | 680 | 0.0536 | 0.8814 | 0.9250 | 0.9822 | 0.9933 | 0.8113 | 0.9703 | 0.9888 | 0.7316 | 0.9237 |
| 0.0353 | 10.9375 | 700 | 0.0571 | 0.8774 | 0.9162 | 0.9812 | 0.9934 | 0.7857 | 0.9695 | 0.9875 | 0.7250 | 0.9198 |
| 0.0488 | 11.25 | 720 | 0.0574 | 0.8821 | 0.9344 | 0.9811 | 0.9950 | 0.8563 | 0.9520 | 0.9875 | 0.7401 | 0.9186 |
| 0.0444 | 11.5625 | 740 | 0.0595 | 0.8784 | 0.9224 | 0.9792 | 0.9957 | 0.8262 | 0.9454 | 0.9843 | 0.7406 | 0.9104 |
| 0.0452 | 11.875 | 760 | 0.0553 | 0.8806 | 0.9365 | 0.9811 | 0.9957 | 0.8664 | 0.9474 | 0.9878 | 0.7361 | 0.9180 |
| 0.0375 | 12.1875 | 780 | 0.0533 | 0.8812 | 0.9237 | 0.9818 | 0.9918 | 0.8046 | 0.9748 | 0.9881 | 0.7330 | 0.9224 |
| 0.0364 | 12.5 | 800 | 0.0530 | 0.8842 | 0.9276 | 0.9822 | 0.9936 | 0.8217 | 0.9676 | 0.9884 | 0.7405 | 0.9236 |
| 0.031 | 12.8125 | 820 | 0.0542 | 0.8818 | 0.9268 | 0.9815 | 0.9954 | 0.8280 | 0.9571 | 0.9877 | 0.7371 | 0.9206 |
| 0.0322 | 13.125 | 840 | 0.0533 | 0.8841 | 0.9352 | 0.9820 | 0.9939 | 0.8506 | 0.9611 | 0.9886 | 0.7411 | 0.9226 |
| 0.0343 | 13.4375 | 860 | 0.0543 | 0.8817 | 0.9219 | 0.9820 | 0.9942 | 0.8044 | 0.9672 | 0.9883 | 0.7341 | 0.9225 |
| 0.0368 | 13.75 | 880 | 0.0520 | 0.8848 | 0.9308 | 0.9824 | 0.9942 | 0.8334 | 0.9647 | 0.9889 | 0.7410 | 0.9245 |
| 0.0297 | 14.0625 | 900 | 0.0535 | 0.8825 | 0.9256 | 0.9821 | 0.9923 | 0.8111 | 0.9735 | 0.9885 | 0.7355 | 0.9234 |
| 0.0606 | 14.375 | 920 | 0.0538 | 0.8800 | 0.9188 | 0.9819 | 0.9939 | 0.7926 | 0.9699 | 0.9885 | 0.7289 | 0.9225 |
| 0.0429 | 14.6875 | 940 | 0.0535 | 0.8802 | 0.9188 | 0.9823 | 0.9938 | 0.7902 | 0.9724 | 0.9889 | 0.7276 | 0.9241 |
| 0.0692 | 15.0 | 960 | 0.0565 | 0.8813 | 0.9278 | 0.9812 | 0.9898 | 0.8163 | 0.9772 | 0.9873 | 0.7367 | 0.9200 |
| 0.0359 | 15.3125 | 980 | 0.0535 | 0.8832 | 0.9261 | 0.9820 | 0.9954 | 0.8228 | 0.9600 | 0.9882 | 0.7390 | 0.9224 |
| 0.0282 | 15.625 | 1000 | 0.0529 | 0.8838 | 0.9240 | 0.9821 | 0.9958 | 0.8160 | 0.9603 | 0.9882 | 0.7399 | 0.9231 |
| 0.038 | 15.9375 | 1020 | 0.0535 | 0.8808 | 0.9217 | 0.9812 | 0.9946 | 0.8094 | 0.9612 | 0.9872 | 0.7364 | 0.9189 |
| 0.0355 | 16.25 | 1040 | 0.0536 | 0.8822 | 0.9222 | 0.9824 | 0.9946 | 0.8042 | 0.9677 | 0.9888 | 0.7333 | 0.9244 |
| 0.046 | 16.5625 | 1060 | 0.0540 | 0.8831 | 0.9248 | 0.9820 | 0.9919 | 0.8074 | 0.9752 | 0.9883 | 0.7378 | 0.9231 |
| 0.0346 | 16.875 | 1080 | 0.0514 | 0.8851 | 0.9283 | 0.9824 | 0.9937 | 0.8231 | 0.9680 | 0.9886 | 0.7420 | 0.9247 |
| 0.0355 | 17.1875 | 1100 | 0.0523 | 0.8844 | 0.9272 | 0.9823 | 0.9947 | 0.8226 | 0.9641 | 0.9886 | 0.7404 | 0.9241 |
| 0.0317 | 17.5 | 1120 | 0.0517 | 0.8834 | 0.9229 | 0.9826 | 0.9946 | 0.8055 | 0.9686 | 0.9890 | 0.7358 | 0.9253 |
| 0.0489 | 17.8125 | 1140 | 0.0526 | 0.8823 | 0.9213 | 0.9824 | 0.9939 | 0.7990 | 0.9711 | 0.9889 | 0.7333 | 0.9246 |
| 0.0318 | 18.125 | 1160 | 0.0520 | 0.8864 | 0.9314 | 0.9824 | 0.9951 | 0.8384 | 0.9607 | 0.9886 | 0.7464 | 0.9242 |
| 0.0264 | 18.4375 | 1180 | 0.0518 | 0.8853 | 0.9300 | 0.9823 | 0.9946 | 0.8329 | 0.9626 | 0.9885 | 0.7439 | 0.9235 |
| 0.036 | 18.75 | 1200 | 0.0524 | 0.8821 | 0.9200 | 0.9826 | 0.9947 | 0.7958 | 0.9696 | 0.9890 | 0.7320 | 0.9253 |
| 0.0288 | 19.0625 | 1220 | 0.0540 | 0.8794 | 0.9167 | 0.9821 | 0.9933 | 0.7818 | 0.9748 | 0.9888 | 0.7258 | 0.9235 |
| 0.0304 | 19.375 | 1240 | 0.0530 | 0.8833 | 0.9230 | 0.9821 | 0.9955 | 0.8111 | 0.9623 | 0.9883 | 0.7384 | 0.9230 |
| 0.0363 | 19.6875 | 1260 | 0.0530 | 0.8838 | 0.9237 | 0.9823 | 0.9951 | 0.8115 | 0.9644 | 0.9885 | 0.7390 | 0.9238 |
| 0.0371 | 20.0 | 1280 | 0.0518 | 0.8861 | 0.9279 | 0.9828 | 0.9940 | 0.8206 | 0.9692 | 0.9891 | 0.7434 | 0.9259 |
| 0.0253 | 20.3125 | 1300 | 0.0541 | 0.8829 | 0.9226 | 0.9824 | 0.9935 | 0.8023 | 0.9720 | 0.9888 | 0.7356 | 0.9245 |
| 0.0296 | 20.625 | 1320 | 0.0533 | 0.8861 | 0.9321 | 0.9824 | 0.9932 | 0.8351 | 0.9681 | 0.9887 | 0.7454 | 0.9243 |
| 0.0306 | 20.9375 | 1340 | 0.0521 | 0.8842 | 0.9254 | 0.9826 | 0.9936 | 0.8112 | 0.9713 | 0.9891 | 0.7381 | 0.9253 |
| 0.0341 | 21.25 | 1360 | 0.0530 | 0.8828 | 0.9217 | 0.9825 | 0.9939 | 0.8001 | 0.9712 | 0.9889 | 0.7347 | 0.9247 |
| 0.0215 | 21.5625 | 1380 | 0.0537 | 0.8840 | 0.9355 | 0.9817 | 0.9954 | 0.8581 | 0.9529 | 0.9881 | 0.7432 | 0.9206 |
| 0.033 | 21.875 | 1400 | 0.0517 | 0.8868 | 0.9319 | 0.9827 | 0.9944 | 0.8369 | 0.9645 | 0.9890 | 0.7462 | 0.9252 |
| 0.0284 | 22.1875 | 1420 | 0.0530 | 0.8840 | 0.9242 | 0.9825 | 0.9938 | 0.8083 | 0.9706 | 0.9889 | 0.7381 | 0.9249 |
| 0.0238 | 22.5 | 1440 | 0.0518 | 0.8864 | 0.9335 | 0.9826 | 0.9949 | 0.8443 | 0.9613 | 0.9890 | 0.7456 | 0.9247 |
| 0.0222 | 22.8125 | 1460 | 0.0541 | 0.8814 | 0.9211 | 0.9823 | 0.9924 | 0.7942 | 0.9766 | 0.9889 | 0.7314 | 0.9240 |
| 0.0263 | 23.125 | 1480 | 0.0528 | 0.8851 | 0.9273 | 0.9826 | 0.9941 | 0.8200 | 0.9677 | 0.9889 | 0.7414 | 0.9249 |
| 0.0246 | 23.4375 | 1500 | 0.0532 | 0.8858 | 0.9317 | 0.9825 | 0.9935 | 0.8343 | 0.9673 | 0.9889 | 0.7437 | 0.9247 |
| 0.0382 | 23.75 | 1520 | 0.0548 | 0.8835 | 0.9276 | 0.9819 | 0.9913 | 0.8164 | 0.9750 | 0.9881 | 0.7399 | 0.9223 |
| 0.02 | 24.0625 | 1540 | 0.0537 | 0.8845 | 0.9271 | 0.9824 | 0.9926 | 0.8158 | 0.9729 | 0.9887 | 0.7406 | 0.9242 |
| 0.0293 | 24.375 | 1560 | 0.0539 | 0.8854 | 0.9300 | 0.9824 | 0.9927 | 0.8261 | 0.9711 | 0.9887 | 0.7433 | 0.9242 |
| 0.0277 | 24.6875 | 1580 | 0.0533 | 0.8854 | 0.9303 | 0.9824 | 0.9929 | 0.8282 | 0.9698 | 0.9887 | 0.7434 | 0.9241 |
| 0.0225 | 25.0 | 1600 | 0.0534 | 0.8854 | 0.9368 | 0.9823 | 0.9937 | 0.8543 | 0.9625 | 0.9889 | 0.7438 | 0.9235 |
| 0.0349 | 25.3125 | 1620 | 0.0535 | 0.8851 | 0.9260 | 0.9827 | 0.9942 | 0.8153 | 0.9686 | 0.9890 | 0.7411 | 0.9252 |
| 0.0258 | 25.625 | 1640 | 0.0527 | 0.8853 | 0.9279 | 0.9826 | 0.9938 | 0.8212 | 0.9686 | 0.9889 | 0.7423 | 0.9248 |
| 0.033 | 25.9375 | 1660 | 0.0522 | 0.8860 | 0.9312 | 0.9826 | 0.9951 | 0.8368 | 0.9618 | 0.9889 | 0.7445 | 0.9247 |
| 0.0202 | 26.25 | 1680 | 0.0518 | 0.8866 | 0.9307 | 0.9828 | 0.9946 | 0.8325 | 0.9649 | 0.9891 | 0.7453 | 0.9255 |
| 0.0246 | 26.5625 | 1700 | 0.0530 | 0.8863 | 0.9369 | 0.9825 | 0.9936 | 0.8535 | 0.9637 | 0.9890 | 0.7457 | 0.9242 |
| 0.0211 | 26.875 | 1720 | 0.0531 | 0.8859 | 0.9289 | 0.9827 | 0.9938 | 0.8240 | 0.9690 | 0.9892 | 0.7429 | 0.9255 |
| 0.0417 | 27.1875 | 1740 | 0.0525 | 0.8862 | 0.9296 | 0.9828 | 0.9935 | 0.8254 | 0.9700 | 0.9891 | 0.7437 | 0.9257 |
| 0.0392 | 27.5 | 1760 | 0.0522 | 0.8868 | 0.9333 | 0.9828 | 0.9939 | 0.8397 | 0.9662 | 0.9892 | 0.7457 | 0.9256 |
| 0.0248 | 27.8125 | 1780 | 0.0531 | 0.8867 | 0.9329 | 0.9827 | 0.9943 | 0.8399 | 0.9645 | 0.9891 | 0.7461 | 0.9251 |
| 0.0255 | 28.125 | 1800 | 0.0540 | 0.8862 | 0.9329 | 0.9825 | 0.9934 | 0.8381 | 0.9673 | 0.9889 | 0.7449 | 0.9247 |
| 0.0233 | 28.4375 | 1820 | 0.0537 | 0.8858 | 0.9296 | 0.9826 | 0.9931 | 0.8251 | 0.9704 | 0.9889 | 0.7435 | 0.9248 |
| 0.0307 | 28.75 | 1840 | 0.0531 | 0.8865 | 0.9299 | 0.9827 | 0.9944 | 0.8291 | 0.9662 | 0.9891 | 0.7450 | 0.9254 |
| 0.0308 | 29.0625 | 1860 | 0.0536 | 0.8867 | 0.9329 | 0.9827 | 0.9939 | 0.8389 | 0.9660 | 0.9890 | 0.7459 | 0.9251 |
| 0.0259 | 29.375 | 1880 | 0.0540 | 0.8850 | 0.9262 | 0.9825 | 0.9945 | 0.8178 | 0.9664 | 0.9888 | 0.7416 | 0.9245 |
| 0.0295 | 29.6875 | 1900 | 0.0545 | 0.8838 | 0.9244 | 0.9824 | 0.9937 | 0.8093 | 0.9703 | 0.9888 | 0.7382 | 0.9243 |
| 0.0197 | 30.0 | 1920 | 0.0539 | 0.8853 | 0.9285 | 0.9825 | 0.9938 | 0.8235 | 0.9683 | 0.9889 | 0.7425 | 0.9247 |
| 0.0369 | 30.3125 | 1940 | 0.0539 | 0.8846 | 0.9269 | 0.9824 | 0.9942 | 0.8195 | 0.9668 | 0.9888 | 0.7407 | 0.9242 |
| 0.0262 | 30.625 | 1960 | 0.0543 | 0.8849 | 0.9287 | 0.9824 | 0.9936 | 0.8241 | 0.9683 | 0.9889 | 0.7415 | 0.9242 |
| 0.0295 | 30.9375 | 1980 | 0.0547 | 0.8845 | 0.9269 | 0.9825 | 0.9932 | 0.8162 | 0.9714 | 0.9889 | 0.7400 | 0.9246 |
| 0.0247 | 31.25 | 2000 | 0.0550 | 0.8855 | 0.9296 | 0.9824 | 0.9943 | 0.8296 | 0.9649 | 0.9887 | 0.7440 | 0.9239 |
| 0.0283 | 31.5625 | 2020 | 0.0552 | 0.8828 | 0.9222 | 0.9823 | 0.9939 | 0.8023 | 0.9705 | 0.9888 | 0.7358 | 0.9240 |
| 0.0333 | 31.875 | 2040 | 0.0543 | 0.8857 | 0.9303 | 0.9825 | 0.9940 | 0.8308 | 0.9660 | 0.9888 | 0.7439 | 0.9244 |
| 0.0256 | 32.1875 | 2060 | 0.0540 | 0.8860 | 0.9365 | 0.9824 | 0.9941 | 0.8535 | 0.9617 | 0.9890 | 0.7450 | 0.9239 |
| 0.0237 | 32.5 | 2080 | 0.0539 | 0.8846 | 0.9241 | 0.9827 | 0.9943 | 0.8083 | 0.9697 | 0.9891 | 0.7390 | 0.9256 |
| 0.0236 | 32.8125 | 2100 | 0.0537 | 0.8855 | 0.9276 | 0.9827 | 0.9937 | 0.8187 | 0.9703 | 0.9891 | 0.7417 | 0.9256 |
| 0.0238 | 33.125 | 2120 | 0.0539 | 0.8849 | 0.9265 | 0.9825 | 0.9947 | 0.8191 | 0.9659 | 0.9889 | 0.7409 | 0.9248 |
| 0.0265 | 33.4375 | 2140 | 0.0543 | 0.8858 | 0.9316 | 0.9825 | 0.9938 | 0.8344 | 0.9664 | 0.9889 | 0.7438 | 0.9246 |
| 0.0274 | 33.75 | 2160 | 0.0555 | 0.8826 | 0.9225 | 0.9824 | 0.9939 | 0.8029 | 0.9706 | 0.9890 | 0.7344 | 0.9245 |
| 0.0232 | 34.0625 | 2180 | 0.0543 | 0.8857 | 0.9316 | 0.9826 | 0.9935 | 0.8336 | 0.9677 | 0.9890 | 0.7434 | 0.9248 |
| 0.0276 | 34.375 | 2200 | 0.0547 | 0.8838 | 0.9240 | 0.9826 | 0.9941 | 0.8082 | 0.9697 | 0.9891 | 0.7373 | 0.9251 |
| 0.033 | 34.6875 | 2220 | 0.0538 | 0.8851 | 0.9267 | 0.9826 | 0.9948 | 0.8198 | 0.9657 | 0.9890 | 0.7413 | 0.9251 |
| 0.0333 | 35.0 | 2240 | 0.0540 | 0.8857 | 0.9291 | 0.9827 | 0.9937 | 0.8247 | 0.9690 | 0.9891 | 0.7426 | 0.9254 |
| 0.0221 | 35.3125 | 2260 | 0.0545 | 0.8856 | 0.9291 | 0.9826 | 0.9941 | 0.8260 | 0.9674 | 0.9891 | 0.7426 | 0.9251 |
| 0.0286 | 35.625 | 2280 | 0.0549 | 0.8852 | 0.9292 | 0.9824 | 0.9940 | 0.8275 | 0.9661 | 0.9887 | 0.7428 | 0.9240 |
| 0.0231 | 35.9375 | 2300 | 0.0545 | 0.8855 | 0.9288 | 0.9826 | 0.9941 | 0.8251 | 0.9673 | 0.9890 | 0.7425 | 0.9250 |
| 0.0301 | 36.25 | 2320 | 0.0544 | 0.8853 | 0.9284 | 0.9825 | 0.9946 | 0.8258 | 0.9650 | 0.9888 | 0.7425 | 0.9245 |
| 0.0311 | 36.5625 | 2340 | 0.0545 | 0.8853 | 0.9289 | 0.9826 | 0.9937 | 0.8245 | 0.9685 | 0.9889 | 0.7422 | 0.9248 |
| 0.0231 | 36.875 | 2360 | 0.0548 | 0.8854 | 0.9284 | 0.9825 | 0.9945 | 0.8257 | 0.9650 | 0.9888 | 0.7430 | 0.9243 |
| 0.0187 | 37.1875 | 2380 | 0.0548 | 0.8859 | 0.9313 | 0.9826 | 0.9941 | 0.8342 | 0.9656 | 0.9890 | 0.7441 | 0.9247 |
| 0.0355 | 37.5 | 2400 | 0.0550 | 0.8846 | 0.9261 | 0.9825 | 0.9945 | 0.8173 | 0.9665 | 0.9889 | 0.7405 | 0.9244 |
| 0.021 | 37.8125 | 2420 | 0.0547 | 0.8857 | 0.9300 | 0.9825 | 0.9940 | 0.8295 | 0.9664 | 0.9889 | 0.7436 | 0.9246 |
| 0.0274 | 38.125 | 2440 | 0.0545 | 0.8854 | 0.9285 | 0.9826 | 0.9940 | 0.8240 | 0.9676 | 0.9890 | 0.7423 | 0.9249 |
| 0.0288 | 38.4375 | 2460 | 0.0545 | 0.8849 | 0.9270 | 0.9826 | 0.9941 | 0.8188 | 0.9682 | 0.9890 | 0.7408 | 0.9250 |
| 0.0315 | 38.75 | 2480 | 0.0548 | 0.8847 | 0.9260 | 0.9826 | 0.9942 | 0.8158 | 0.9681 | 0.9890 | 0.7404 | 0.9248 |
| 0.0221 | 39.0625 | 2500 | 0.0550 | 0.8858 | 0.9295 | 0.9826 | 0.9941 | 0.8276 | 0.9668 | 0.9890 | 0.7435 | 0.9248 |
| 0.021 | 39.375 | 2520 | 0.0552 | 0.8855 | 0.9290 | 0.9826 | 0.9940 | 0.8255 | 0.9674 | 0.9889 | 0.7429 | 0.9248 |
| 0.0261 | 39.6875 | 2540 | 0.0544 | 0.8852 | 0.9274 | 0.9826 | 0.9942 | 0.8208 | 0.9673 | 0.9889 | 0.7419 | 0.9248 |
| 0.0152 | 40.0 | 2560 | 0.0547 | 0.8851 | 0.9274 | 0.9826 | 0.9941 | 0.8203 | 0.9678 | 0.9889 | 0.7417 | 0.9247 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
mustafasalfiti/model_llama-3.2-1b-finetuned
|
mustafasalfiti
| 2025-04-03T11:41:32Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-02T10:10:16Z
|
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mustafasalfiti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
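A minimal text-generation sketch, assuming the safetensors weights in this repository load directly with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mustafasalfiti/model_llama-3.2-1b-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```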
|
LHRuig/justinbibresx
|
LHRuig
| 2025-04-03T11:41:06Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:40:24Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: justinbibresx
---
# justinbibresx
<Gallery />
## Model description
justinbibresx lora
## Trigger words
You should use `justinbibresx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/justinbibresx/tree/main) them in the Files & versions tab.
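A hedged usage sketch with 🧨 diffusers, assuming the LoRA file in this repository can be auto-detected:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("LHRuig/justinbibresx")  # assumes a single LoRA safetensors file
image = pipeline("justinbibresx wearing a suit").images[0]
image.save("justinbibresx.png")
```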
|
1Artur1/fitmisia
|
1Artur1
| 2025-04-03T11:40:58Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-03T10:17:47Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: FITMISIA
---
# Fitmisia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `FITMISIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "FITMISIA",
    "lora_weights": "https://huggingface.co/1Artur1/fitmisia/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('1Artur1/fitmisia', weight_name='lora.safetensors')
image = pipeline('FITMISIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 128
## Contribute your own examples
You can use the [community tab](https://huggingface.co/1Artur1/fitmisia/discussions) to add images that show off what you’ve made with this LoRA.
|
Skyfallirk/gary_bant_LoRa
|
Skyfallirk
| 2025-04-03T11:39:04Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-04-03T11:38:59Z
|
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a paint in Gary Bant style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Skyfallirk/gary_bant_LoRa
<Gallery />
## Model description
These are Skyfallirk/gary_bant_LoRa LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a paint in Gary Bant style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Skyfallirk/gary_bant_LoRa/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
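As a placeholder for the snippet above, here is a hedged sketch of the usual SDXL LoRA loading pattern, assuming the adapter filename can be auto-detected from this repository:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Skyfallirk/gary_bant_LoRa")  # assumes a default adapter file name
image = pipeline("a paint in Gary Bant style").images[0]
image.save("gary_bant_sample.png")
```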
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
LHRuig/justinbibrbsx
|
LHRuig
| 2025-04-03T11:38:54Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-04-03T11:38:22Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: justinbibrbsx
---
# justinbibrbsx
<Gallery />
## Model description
justinbibrbsx lora
## Trigger words
You should use `justinbibrbsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/justinbibrbsx/tree/main) them in the Files & versions tab.
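A hedged diffusers sketch (same pattern as other FLUX LoRAs; the weights filename is an assumption):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("LHRuig/justinbibrbsx")  # assumes a single LoRA safetensors file
image = pipeline("justinbibrbsx in a suit").images[0]
```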
|
hZzy/qwen2.5-0.5b-expo-L2EXPO-25-5
|
hZzy
| 2025-04-03T11:37:43Z
| 2
| 0
| null |
[
"safetensors",
"qwen2",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/train_pairwise_all_new4",
"base_model:hZzy/qwen2.5-0.5b-sft3-25-2",
"base_model:finetune:hZzy/qwen2.5-0.5b-sft3-25-2",
"license:apache-2.0",
"region:us"
] | null | 2025-03-07T08:18:12Z
|
---
license: apache-2.0
base_model: hZzy/qwen2.5-0.5b-sft3-25-2
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
- trl
- expo
- generated_from_trainer
datasets:
- hZzy/train_pairwise_all_new4
model-index:
- name: qwen2.5-0.5b-expo-L2EXPO-25-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/io11gyc9)
# qwen2.5-0.5b-expo-L2EXPO-25-5
This model is a fine-tuned version of [hZzy/qwen2.5-0.5b-sft3-25-2](https://huggingface.co/hZzy/qwen2.5-0.5b-sft3-25-2) on the hZzy/train_pairwise_all_new4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4923
- Objective: 0.4887
- Reward Accuracy: 0.6074
- Logp Accuracy: 0.5755
- Log Diff Policy: 8.2386
- Chosen Logps: -164.2356
- Rejected Logps: -172.4742
- Chosen Rewards: -0.7676
- Rejected Rewards: -0.8467
- Logits: -2.1557
## Model description
More information needed
## Intended uses & limitations
More information needed
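An unverified usage sketch, assuming the tokenizer ships a chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-0.5b-expo-L2EXPO-25-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize what preference optimization does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```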
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 12
- total_train_batch_size: 288
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Reward Accuracy | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Chosen Rewards | Rejected Rewards | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:---------------:|:-------------:|:---------------:|:------------:|:--------------:|:--------------:|:----------------:|:-------:|
| 0.4966 | 0.1577 | 50 | 0.5072 | 0.4997 | 0.5419 | 0.5246 | 1.3296 | -94.5993 | -95.9289 | -0.0712 | -0.0813 | -1.2757 |
| 0.4916 | 0.3154 | 100 | 0.4996 | 0.4914 | 0.5923 | 0.5459 | 2.6241 | -103.4037 | -106.0279 | -0.1593 | -0.1823 | -1.3990 |
| 0.495 | 0.4731 | 150 | 0.4911 | 0.4846 | 0.5917 | 0.5643 | 3.8009 | -118.6434 | -122.4443 | -0.3117 | -0.3464 | -1.4872 |
| 0.4515 | 0.6307 | 200 | 0.4857 | 0.4794 | 0.6147 | 0.5794 | 4.8895 | -128.3570 | -133.2465 | -0.4088 | -0.4545 | -1.6300 |
| 0.4525 | 0.7884 | 250 | 0.4853 | 0.4768 | 0.6191 | 0.5817 | 5.7732 | -127.3466 | -133.1198 | -0.3987 | -0.4532 | -1.8956 |
| 0.4265 | 0.9461 | 300 | 0.4800 | 0.4722 | 0.6208 | 0.5906 | 6.1759 | -134.5628 | -140.7387 | -0.4709 | -0.5294 | -1.8486 |
| 0.3982 | 1.1038 | 350 | 0.4826 | 0.4742 | 0.6152 | 0.5783 | 7.0062 | -142.1399 | -149.1461 | -0.5466 | -0.6135 | -1.8858 |
| 0.4035 | 1.2615 | 400 | 0.4837 | 0.4743 | 0.6152 | 0.5923 | 7.3228 | -147.7389 | -155.0617 | -0.6026 | -0.6726 | -1.9345 |
| 0.3797 | 1.4192 | 450 | 0.4862 | 0.4791 | 0.6091 | 0.5845 | 7.2548 | -148.0394 | -155.2942 | -0.6056 | -0.6749 | -2.0149 |
| 0.3863 | 1.5769 | 500 | 0.4864 | 0.4776 | 0.6163 | 0.5789 | 7.9205 | -150.3136 | -158.2340 | -0.6284 | -0.7043 | -2.0393 |
| 0.3587 | 1.7346 | 550 | 0.4872 | 0.4820 | 0.6102 | 0.5811 | 7.6852 | -150.1711 | -157.8564 | -0.6270 | -0.7006 | -2.1184 |
| 0.3436 | 1.8922 | 600 | 0.4934 | 0.4904 | 0.6074 | 0.5822 | 7.9098 | -162.3326 | -170.2424 | -0.7486 | -0.8244 | -2.1839 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
|
braindao/DeepSeek-R1-Distill-Qwen-7B-Blunt
|
braindao
| 2025-04-03T11:34:32Z
| 21
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-20T03:29:05Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
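A minimal, unverified sketch, assuming the repository's 4-bit quantization config is picked up automatically and the tokenizer ships a chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "braindao/DeepSeek-R1-Distill-Qwen-7B-Blunt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # bitsandbytes is needed for the 4-bit weights

messages = [{"role": "user", "content": "Explain gradient descent in one paragraph."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```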
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xw17/Phi-3-mini-4k-instruct_finetuned_4_def_lora3
|
xw17
| 2025-04-03T11:34:31Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:34:23Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bharustak/brexit_xlm_roberta
|
bharustak
| 2025-04-03T11:31:20Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-03T11:30:26Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
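A minimal sketch, assuming this is a standard XLM-RoBERTa sequence-classification checkpoint (the label names are not documented):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bharustak/brexit_xlm_roberta")
print(classifier("The referendum result will reshape trade policy."))
```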
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
justmalhar/fluent-dev-8b_unsloth_finetune
|
justmalhar
| 2025-04-03T11:28:57Z
| 0
| 0
|
transformers
|
[
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-03T11:22:01Z
|
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** justmalhar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
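A rough usage sketch, assuming this repository holds merged Llama-3.2-Vision weights loadable with the standard `transformers` Mllama classes:
```python
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "justmalhar/fluent-dev-8b_unsloth_finetune"
model = MllamaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical input image
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(outputs[0]))
```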
|
alexeyGod/jjjjjiuiui
|
alexeyGod
| 2025-04-03T11:25:28Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-04-03T11:07:57Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/21151089.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# ytyt
<Gallery />
## Model description

## Download model
Weights for this model are available in Safetensors format.
[Download](/alexeyGod/jjjjjiuiui/tree/main) them in the Files & versions tab.
|
DiTy/cross-encoder-russian-msmarco
|
DiTy
| 2025-04-03T11:25:25Z
| 288,732
| 13
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"text-classification",
"transformers",
"rubert",
"cross-encoder",
"reranker",
"msmarco",
"text-ranking",
"ru",
"dataset:unicamp-dl/mmarco",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"license:mit",
"region:us"
] |
text-ranking
| 2024-04-19T15:24:56Z
|
---
language:
- ru
library_name: sentence-transformers
tags:
- sentence-transformers
- text-classification
- transformers
- rubert
- cross-encoder
- reranker
- msmarco
datasets:
- unicamp-dl/mmarco
base_model: DeepPavlov/rubert-base-cased
widget:
- text: как часто нужно ходить к стоматологу? [SEP] Дядя Женя работает врачем стоматологом.
example_title: Example 1
- text: как часто нужно ходить к стоматологу? [SEP] Минимальный обязательный срок
посещения зубного врача – раз в год, но специалисты рекомендуют делать это чаще
– раз в полгода, а ещё лучше – раз в квартал. При таком сроке легко отследить
любые начинающиеся проблемы и исправить их сразу же.
example_title: Example 2
license: mit
pipeline_tag: text-ranking
---
# DiTy/cross-encoder-russian-msmarco
This is a [sentence-transformers](https://www.SBERT.net) model based on the pre-trained [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) and fine-tuned on the [MS-MARCO Russian passage ranking dataset](https://huggingface.co/datasets/unicamp-dl/mmarco).
The model can be used for Information Retrieval in Russian: given a query, encode the query together with all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import CrossEncoder
reranker_model = CrossEncoder('DiTy/cross-encoder-russian-msmarco', max_length=512, device='cuda')
query = ["как часто нужно ходить к стоматологу?"]
documents = [
"Минимальный обязательный срок посещения зубного врача – раз в год, но специалисты рекомендуют делать это чаще – раз в полгода, а ещё лучше – раз в квартал. При таком сроке легко отследить любые начинающиеся проблемы и исправить их сразу же.",
"Основная причина заключается в истончении поверхностного слоя зуба — эмали, которая защищает зуб от механических, химических и температурных воздействий. Под эмалью расположен дентин, который более мягкий по своей структуре и пронизан множеством канальцев. При повреждении эмали происходит оголение дентинных канальцев. Раздражение с них начинает передаваться на нервные окончания в зубе и возникают болевые ощущения. Чаще всего дентин оголяется в придесневой области зубов, поскольку эмаль там наиболее тонкая и стирается быстрее.",
"Стоматолог, также известный как стоматолог-хирург, является медицинским работником, который специализируется на стоматологии, отрасли медицины, специализирующейся на зубах, деснах и полости рта.",
"Дядя Женя работает врачем стоматологом",
"Плоды малины употребляют как свежими, так и замороженными или используют для приготовления варенья, желе, мармелада, соков, а также ягодного пюре. Малиновые вина, наливки, настойки, ликёры обладают высокими вкусовыми качествами.",
]
predict_result = reranker_model.predict([[query[0], documents[0]]])
print(predict_result)
# `array([0.88126713], dtype=float32)`
rank_result = reranker_model.rank(query[0], documents)
print(rank_result)
# `[{'corpus_id': 0, 'score': 0.88126713},
# {'corpus_id': 2, 'score': 0.001042091},
# {'corpus_id': 3, 'score': 0.0010417715},
# {'corpus_id': 1, 'score': 0.0010344835},
# {'corpus_id': 4, 'score': 0.0010244923}]`
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your query-passage pairs through the transformer model, then read the relevance scores from the returned logits.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained('DiTy/cross-encoder-russian-msmarco')
tokenizer = AutoTokenizer.from_pretrained('DiTy/cross-encoder-russian-msmarco')
features = tokenizer(["как часто нужно ходить к стоматологу?", "как часто нужно ходить к стоматологу?"], ["Минимальный обязательный срок посещения зубного врача – раз в год, но специалисты рекомендуют делать это чаще – раз в полгода, а ещё лучше – раз в квартал. При таком сроке легко отследить любые начинающиеся проблемы и исправить их сразу же.", "Дядя Женя работает врачем стоматологом"], padding=True, truncation=True, return_tensors='pt')
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
# `tensor([[ 1.6871],
# [-6.8700]])`
```
|
ChandrilBasu/Mahi
|
ChandrilBasu
| 2025-04-03T11:24:09Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-03T11:24:02Z
|
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Mahi
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Mahi
<Gallery />
## Model description
## Trigger words
You should use `Mahi` to trigger the image generation.
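Below is a minimal usage sketch with 🤗 Diffusers. The pipeline setup, LoRA loading call, prompt, and inference settings are standard Diffusers usage assumed for illustration, not details documented in this repository; adjust them as needed.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (requires accepting its license on the Hub).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Attach the LoRA weights from this repository
# (assumes a single LoRA safetensors file in the repo root).
pipe.load_lora_weights("ChandrilBasu/Mahi")

# Use the trigger word `Mahi` in the prompt.
image = pipe(
    "Mahi, portrait photo, soft natural light",  # example prompt, not from the card
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("mahi.png")
```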
## Download model
Weights for this model are available in Safetensors format.
[Download](/ChandrilBasu/Mahi/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
amixh/sentence-embedding-model-InLegalBERT-2
|
amixh
| 2025-04-03T11:23:37Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1788",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:law-ai/InLegalBERT",
"base_model:finetune:law-ai/InLegalBERT",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-04-03T11:23:11Z
|
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1788
- loss:TripletLoss
base_model: law-ai/InLegalBERT
widget:
- source_sentence: '[IPC_SECTION_351] According to Whoever makes any gesture, or any
preparation intending or knowing it to be likely that such gesture or preparation
will cause any person present to apprehend that he who makes that gesture or preparation
is about to use criminal force to that person, is said to commit an assault. IPC
351 in Simple Words they are considered to have committed an assault.'
sentences:
- '[CRPC_SECTION_162] Section 162, No statement made by any person to a police officer
in the course of an investigation under this Chapter, shall, if reduced to writing,
be signed by the person making it; nor shall any such statement or any record
thereof, whether in a police diary or otherwise, or any part of such statement
or record, be used for any purpose, save as hereinafter provided, at any inquiry
or trial in respect of any offence under investigation at the time when such statement
was made; Provided that when any witness is called for the prosecution in such
inquiry or trial whose statement has been reduced into writing as aforesaid, any
part of his statement, if duly proved, may be used by the accused, and with the
permission of the Court, by the prosecution, to contradict such witness in the
manner provided by section 145 of the , 1872 (1 of 1872); and when any part of
such statement is so used, any part thereof may also be used in the re-examination
of such witness, but for the purpose only of explaining any matter referred to
in his cross-examination. Nothing in this section shall be deemed to apply to
any statement falling within the provisions of clause (1) of section 32 of the
, 1872 (1 of 1872), or to affect the provisions of section 27 of that Act.'
- Section 446A, Without prejudice to the provisions of section 446, where a bond
under this Code is for appearance of a person in a case and it is forfeited for
breach of a condition— the bond executed by such person as well as the bond, if
any, executed by one or more of his sureties in that case shall stand cancelled;
and thereafter no such person shall be released only on his own bond in that case,
if the Police Officer or the Court, as the case may be, for appearance before
whom the bond was executed, is satisfied that there was no sufficient cause for
the failure of the person bound by the bond to comply with its condition; Provided
that subject to any other provision of this Code he may be released in that case
upon the execution of a fresh personal bond for such sum of money and bond by
one or more of such sureties as the Police Officer or the Court, as the case may
be, thinks sufficient.
- According to Whoever makes any gesture, or any preparation intending or knowing
it to be likely that such gesture or preparation will cause any person present
to apprehend that he who makes that gesture or preparation is about to use criminal
force to that person, is said to commit an assault. IPC 351 in Simple Words they
are considered to have committed an assault.
- source_sentence: '[NIA_SECTION_71] Section 71, If the maker, drawee or acceptor
of a negotiable instrument has no known place of business or fixed residence,
and no place is specified in the instrument for presentment for acceptance or
payment, such presentment may be made to him in person wherever be can be found.'
sentences:
- Section 123, Whenever the District Magistrate in the case of an order passed by
an Executive Magistrate under section 117, or the Chief Judicial Magistrate in
any other case is of opinion that any person imprisoned for failing to give security
under this Chapter may be released without hazard to the community or to any other
person, he may order such person to be discharged. Whenever any person has been
imprisoned for failing to give security under this Chapter, the High Court or
Court of Session, or, where the order was made by any other Court, the District
Magistrate, in the case of an order passed by an Executive Magistrate under section
117, or the Chief Judicial Magistrate in any other case, may make an order reducing
the amount of the security or the number of sureties or the time for which security
has been required. An order under Sub-Section (1) may direct the discharge of
such person either without conditions or upon any conditions which such person
accepts; Provided that any condition imposed shall cease to be operative when
the period for which such person was ordered to give security has expired. The
State Government may prescribe the conditions upon which a conditional discharge
may be made. If any condition upon which any person has been discharged is, in
the opinion of the District Magistrate, in the case of an order passed by an Executive
Magistrate under section 117, or the Chief Judicial Magistrate in any other case
by whom the order of discharge was made or of his successor, not fulfilled, he
may cancel the same. When a conditional order of discharge has been cancelled
under Sub-Section (5), such person may be arrested by any police officer without
warrant, and shall thereupon be produced before the District Magistrate, in the
case of an order passed by an Executive Magistrate under section 117, or the Chief
Judicial Magistrate in any other case. Unless such person gives security in accordance
with the terms of the original order for the unexpired portion of the term for
which he was in the first instance committed or ordered to be detained (such portion
being deemed to be a period equal to the period between the date of the breach
of the conditions of discharge and the date on which, except for such conditional
discharge, he would have been entitled to release), the District Magistrate, in
the case of an order passed by an Executive Magistrate under section 117, or the
Chief Judicial Magistrate in any other case may remand such person to prison to
undergo such unexpired portion. A person remanded to prison under Sub-Section
(7) shall, subject to the provisions of section 122, be released at any lime on
giving security in accordance with the terms of the original order for the unexpired
portion aforesaid to the Court or Magistrate by whom such order was made, or to
its or his successor. The High Court or Court of Sessions may at any time, for
sufficient reasons to be recorded in writing, cancel any bond for keeping the
peace or for good behaviour executed under this Chapter by any order made by it,
and the District Magistrate, in the case of an order passed by an Executive Magistrate
under section 117, or the Chief Judicial Magistrate in any other case may make
such cancellation where such bond was executed under his order or under the order
of any other Court in his district. Any surety for the peaceable conduct or good
behaviour of another person, ordered to execute a bond under this Chapter may
at any time apply to the Court making such order to cancel the bond and on such
application being made, the Court shall issue a summons or warrant, as it thinks
fit, requiring the person for whom such surety is bound to appear or to be brought
before it.
- Section 71, If the maker, drawee or acceptor of a negotiable instrument has no
known place of business or fixed residence, and no place is specified in the instrument
for presentment for acceptance or payment, such presentment may be made to him
in person wherever be can be found.
- '[NIA_SECTION_121] Section 121, No maker of a promissory note and no acceptor
of a bill of exchange payable to order shall, in a suit thereon by a holder in
due course, be permitted to deny the payee’s capacity, at the date of the note
or bill, to indorse the same.'
- source_sentence: '[IPC_SECTION_343] According to Whoever wrongfully confines any
person for three days or more, shall be punished with imprisonment of either description
for a term which may extend to two years, or with fine, or with both. IPC 343
in Simple Words or a fine, or both.'
sentences:
- D, D According to section 354D of , (1) Any man who— follows a woman and contacts,
or attempts to contact such woman to foster personal interaction repeatedly despite
a clear indication of disinterest by such woman; or monitors the use by a woman
of the internet, email or any other form of electronic communication, commits
the offence of stalking; Provided that such conduct shall not amount to stalking
if the man who pursued it proves that— it was pursued for the purpose of preventing
or detecting crime and the man accused of stalking had been entrusted with the
responsibility of prevention and detection of crime by the State; or it was pursued
under any law or to comply with any condition or requirement imposed by any person
under any law; or in the particular circumstances such conduct was reasonable
and justified. (2) Whoever commits the offence of stalking shall be punished on
first conviction with imprisonment of either description for a term which may
extend to three years, and shall also be liable to fine; and be punished on a
second or subsequent conviction, with imprisonment of either description for a
term which may extend to five years, and shall also be liable to fine. IPC 354D
in Simple Words According to section 354D of the , any man who repeatedly follows,
contacts, or monitors a woman's electronic communications despite her clear disinterest
commits the offence of stalking and can be imprisoned for up to three years on
first conviction and up to five years on subsequent convictions, along with a
fine. However, certain justifiable circumstances may not be considered stalking.
- '[CONSTITUTION_ARTICLE_173] Qualification for membership of the State Legislature
A person shall not be qualified to be chosen to fill a seat in the Legislature
of a State unless he (a) is a citizen of India, and makes and subscribes before
some person authorised in that behalf by the Election Commission an oath or affirmation
according to the form set out for the purpose in the Third Schedule; (b) is, in
the case of a seat in the Legislative Assembly, not less than twenty five years
of age and in the case of a seat in the Legislative Council, not less than thirty
years of age; and (c) possesses such other qualifications as may be prescribed
in that behalf by or under any law made by Parliament'
- According to Whoever wrongfully confines any person for three days or more, shall
be punished with imprisonment of either description for a term which may extend
to two years, or with fine, or with both. IPC 343 in Simple Words or a fine, or
both.
- source_sentence: '[CPC_SECTION_82] Section 82, 1[(I) Where, in a suit by or against
the Government or by or against a public officer in respect of any act purporting
to be done by him in his official capacity, a decree is passed against the Union
of India or a State or, as the case may be, the public officer, such decree shall
not be executed except in accordance with the provisions of sub-section (2).]
(2) Execution shall not be issued on any such decree unless it remains unsatisfied
for the period of three months computed from the date of 2 [such decree.] 3[(3)
The provisions of sub-sections (1) and (2) shall apply in relation to an order
or award as they apply in relation to a decree, if the order or award — (a) is
passed or made against 4 [the Union of India or a State or a public officer in
respect of any such act as aforesaid, whether by a Court or by any other authority;
and (b) is capable of being executed under the provisions of this Code or of any
other law for the time being in force as if it were a decree.]'
sentences:
- Section 82, 1 (2) Execution shall not be issued on any such decree unless it remains
unsatisfied for the period of three months computed from the date of 2 3
- Section 131, No one shall be compelled to produce documents in his possession
or electronic records under his control, which any other person would be entitled
to refuse to produce if they were in his possession or control, unless such last-mentioned
person consents to their production.
- '[CONSTITUTION_ARTICLE_93] The Speaker and Deputy Speaker of the House of the
People The House of the People shall, as soon as may be, choose two members of
the House to be respectively Speaker and Deputy Speaker thereof and, so often
as the office of Speaker or Deputy Speaker becomes vacant, the House shall choose
another member to be Speaker or Deputy Speaker, as the case may be'
- source_sentence: '[CONSTITUTION_ARTICLE_252] Power of Parliament to legislate for
two or more States by consent and adoption of such legislation by any other State
(1) If it appears to the Legislatures of two or more States to be desirable that
any of the matters with respect to which Parliament has no power to make laws
for the States except as provided in Articles 249 and 250 should be regulated
in such States by Parliament by law, and if resolutions to that effect are passed
by all the House of the Legislatures of those States, it shall be lawful for Parliament
to pass an Act for regulating that matter accordingly, and any Act so passed shall
apply to such States and to any other State by which it is adopted afterwards
by resolution passed in that behalf by the House or, where there are two Houses,
by each of the Houses of the Legislature of that State (2) Any Act so passed by
Parliament may be amended or repealed by an Act of Parliament passed or adopted
in like manner but shall not, as respects any State to which it applies, be amended
or repealed by an Act of the Legislature of that State'
sentences:
- Section 9, Facts necessary to explain or introduce a fact in issue or relevant
fact, or which support or rebut an inference suggested by a fact in issue or relevant
fact, or which establish the identity of any thing or person whose identity is
relevant, or fix the time or place at which any fact in issue or relevant fact
happened, or which show the relation of parties by whom any such fact was transacted,
are relevant in so far as they are necessary for that purpose.
- Power of Parliament to legislate for two or more States by consent and adoption
of such legislation by any other State (1) If it appears to the Legislatures of
two or more States to be desirable that any of the matters with respect to which
Parliament has no power to make laws for the States except as provided in Articles
249 and 250 should be regulated in such States by Parliament by law, and if resolutions
to that effect are passed by all the House of the Legislatures of those States,
it shall be lawful for Parliament to pass an Act for regulating that matter accordingly,
and any Act so passed shall apply to such States and to any other State by which
it is adopted afterwards by resolution passed in that behalf by the House or,
where there are two Houses, by each of the Houses of the Legislature of that State
(2) Any Act so passed by Parliament may be amended or repealed by an Act of Parliament
passed or adopted in like manner but shall not, as respects any State to which
it applies, be amended or repealed by an Act of the Legislature of that State
- '[CRPC_SECTION_206] Section 206, If, in the opinion of a Magistrate taking cognizance
of a petty offence, the case may be summarily disposed of under section 260 or
section 261, the Magistrate shall, except where he is, for reasons to be recorded
in writing of a contrary opinion, issue summons to the accused requiring him either
to appear in person or by pleader before the Magistrate on a specified date, or
if he desires to plead guilty to the charge without appearing before the Magistrate,
to transmit before the specified date, by post or by messenger to the Magistrate,
the said plea in writing and the amount of fine specified in the summons or if
he desires to appear by pleader and to plead guilty to the charge through such
pleader, to authorise, in writing, the pleader to plead guilty to the charge on
his behalf and to pay the fine through such pleader; Provided that the amount
of the fine specified in such summons shall not exceed one thousand rupees. For
the purposes of this section, “petty offence” means any offence punishable only
with fine not exceeding one thousand rupees, but does not include any offence
so punishable under the Motor Vehicles Act, 1931, or under any other law which
provides for convicting the accused person in his absence on a plea of guilty.
The State Government may, by notification, specially empower any Magistrate to
exercise the powers conferred by Sub-Section (1) in relation to any offence which
is compoundable under section 320 or any offence punishable with imprisonment
for a term not exceeding three months, or with fine or with both where the Magistrate
is of opinion that, having regard to the facts and circumstances of the case,
the imposition of fine only would meet the ends of justice.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on law-ai/InLegalBERT
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [law-ai/InLegalBERT](https://huggingface.co/law-ai/InLegalBERT). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [law-ai/InLegalBERT](https://huggingface.co/law-ai/InLegalBERT) <!-- at revision b5ecfed8ed6cf9d25a3cb8225a8c52f161f7401a -->
- **Maximum Sequence Length:** 320 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 320, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("amixh/sentence-embedding-model-InLegalBERT-2")
# Run inference
sentences = [
'[CONSTITUTION_ARTICLE_252] Power of Parliament to legislate for two or more States by consent and adoption of such legislation by any other State (1) If it appears to the Legislatures of two or more States to be desirable that any of the matters with respect to which Parliament has no power to make laws for the States except as provided in Articles 249 and 250 should be regulated in such States by Parliament by law, and if resolutions to that effect are passed by all the House of the Legislatures of those States, it shall be lawful for Parliament to pass an Act for regulating that matter accordingly, and any Act so passed shall apply to such States and to any other State by which it is adopted afterwards by resolution passed in that behalf by the House or, where there are two Houses, by each of the Houses of the Legislature of that State (2) Any Act so passed by Parliament may be amended or repealed by an Act of Parliament passed or adopted in like manner but shall not, as respects any State to which it applies, be amended or repealed by an Act of the Legislature of that State',
'Power of Parliament to legislate for two or more States by consent and adoption of such legislation by any other State (1) If it appears to the Legislatures of two or more States to be desirable that any of the matters with respect to which Parliament has no power to make laws for the States except as provided in Articles 249 and 250 should be regulated in such States by Parliament by law, and if resolutions to that effect are passed by all the House of the Legislatures of those States, it shall be lawful for Parliament to pass an Act for regulating that matter accordingly, and any Act so passed shall apply to such States and to any other State by which it is adopted afterwards by resolution passed in that behalf by the House or, where there are two Houses, by each of the Houses of the Legislature of that State (2) Any Act so passed by Parliament may be amended or repealed by an Act of Parliament passed or adopted in like manner but shall not, as respects any State to which it applies, be amended or repealed by an Act of the Legislature of that State',
'[CRPC_SECTION_206] Section 206, If, in the opinion of a Magistrate taking cognizance of a petty offence, the case may be summarily disposed of under section 260 or section 261, the Magistrate shall, except where he is, for reasons to be recorded in writing of a contrary opinion, issue summons to the accused requiring him either to appear in person or by pleader before the Magistrate on a specified date, or if he desires to plead guilty to the charge without appearing before the Magistrate, to transmit before the specified date, by post or by messenger to the Magistrate, the said plea in writing and the amount of fine specified in the summons or if he desires to appear by pleader and to plead guilty to the charge through such pleader, to authorise, in writing, the pleader to plead guilty to the charge on his behalf and to pay the fine through such pleader; Provided that the amount of the fine specified in such summons shall not exceed one thousand rupees. For the purposes of this section, “petty offence” means any offence punishable only with fine not exceeding one thousand rupees, but does not include any offence so punishable under the Motor Vehicles Act, 1931, or under any other law which provides for convicting the accused person in his absence on a plea of guilty. The State Government may, by notification, specially empower any Magistrate to exercise the powers conferred by Sub-Section (1) in relation to any offence which is compoundable under section 320 or any offence punishable with imprisonment for a term not exceeding three months, or with fine or with both where the Magistrate is of opinion that, having regard to the facts and circumstances of the case, the imposition of fine only would meet the ends of justice.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,788 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 138.36 tokens</li><li>max: 320 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 130.74 tokens</li><li>max: 320 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 138.37 tokens</li><li>max: 320 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>[IPC_SECTION_395] According to Whoever commits dacoity shall be punished with imprisonment for life, or with rigorous imprisonment for a term which may extend to ten years, and shall also be liable to fine. IPC 395 in Simple Words Whoever commits dacoity shall be punished with either life imprisonment or rigorous imprisonment up to ten years, and may also face a fine.</code> | <code>According to Whoever commits dacoity shall be punished with imprisonment for life, or with rigorous imprisonment for a term which may extend to ten years, and shall also be liable to fine. IPC 395 in Simple Words Whoever commits dacoity shall be punished with either life imprisonment or rigorous imprisonment up to ten years, and may also face a fine.</code> | <code>[CONSTITUTION_ARTICLE_293] Borrowing by States (1) Subject to the provisions of this article, the executive power of a State extends to borrowing within the territory of India upon the security of the Consolidated Fund of the State within such limits, if any, as may from time to time be fixed by the Legislature of such State by law and to the giving of guarantees within such limits, if any, as may be so fixed (2) The Government of India may, subject to such conditions as may be laid down by or under any law made by Parliament, make loans to any State or, so long as any limits fixed under Article 292 are not exceeded, give guarantees in respect of loans raised by any State, and any sums required for the purpose of making such loans shall be charged on the Consolidated Fund of India (3) A State may not without the consent of the Government of India raise any loan if there is still outstanding any part of a loan which has been made to the State by the Government of India or by its predece...</code> |
| <code>[IPC_SECTION_344] According to Whoever wrongfully confines any person for ten days, or more, shall be punished with imprisonment of either description for a term which may extend to three years, and shall also be liable to fine. IPC 344 in Simple Words Section 344 of the states that anyone who wrongfully confines a person for ten days or more can be punished with imprisonment for up to three years and may also be fined.</code> | <code>According to Whoever wrongfully confines any person for ten days, or more, shall be punished with imprisonment of either description for a term which may extend to three years, and shall also be liable to fine. IPC 344 in Simple Words Section 344 of the states that anyone who wrongfully confines a person for ten days or more can be punished with imprisonment for up to three years and may also be fined.</code> | <code>[CRPC_SECTION_296] Section 296, The evidence of any person whose evidence is of a formal character may be given by affidavit and may, subject to all just exceptions, be read in evidence in any inquiry, trial or other proceeding under this Code. The Court may, if it thinks fit, and shall, on the application of the prosecution or the accused, summon and examine any such person as to the facts contained in his affidavit.</code> |
| <code>[CRPC_SECTION_263] Section 263, In every case tried summarily, the Magistrate shall enter, in such form as the Stale Government may direct, the following particulars, namely— the serial number of the case; the date of the commission of the offence; the date of the report of complaint; the name of the complainant (if any); the name, parentage and residence of the accused; the offence complained of and the offence (if any) proved, and in cases coming under clause (ii), clause (iii) or clause (iv) of Sub-Section (1) of section 260, the value of the property in respect of which the offence has been committed; the plea of the accused and his examination (if any); the finding; the sentence or other final order; the date on which proceedings terminated.</code> | <code>Section 263, In every case tried summarily, the Magistrate shall enter, in such form as the Stale Government may direct, the following particulars, namely— the serial number of the case; the date of the commission of the offence; the date of the report of complaint; the name of the complainant (if any); the name, parentage and residence of the accused; the offence complained of and the offence (if any) proved, and in cases coming under clause (ii), clause (iii) or clause (iv) of Sub-Section (1) of section 260, the value of the property in respect of which the offence has been committed; the plea of the accused and his examination (if any); the finding; the sentence or other final order; the date on which proceedings terminated.</code> | <code>[CRPC_SECTION_342] Section 342, Any Court dealing with an application made to it for filing a complaint under section 340 or an appeal under section 341, shall have power to make such order as to costs as may be just.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.5
}
```
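The JSON above mirrors the arguments of `sentence_transformers.losses.TripletLoss`; a minimal configuration sketch is shown below. The base-model line is illustrative only and is not the exact training script used for this checkpoint.

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.losses import TripletDistanceMetric

# Illustrative sketch: recreate the loss with the parameters listed above.
model = SentenceTransformer("law-ai/InLegalBERT")
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.COSINE,  # cosine distance between embeddings
    triplet_margin=0.5,  # anchor-positive must beat anchor-negative by at least 0.5
)
```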
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.0.1
- Transformers: 4.50.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
SantiagoSanchezF/BiomedBERT_mgnify_studies
|
SantiagoSanchezF
| 2025-04-03T11:20:21Z
| 0
| 0
| null |
[
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:SantiagoSanchezF/mgnify_study_descriptions",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2025-04-03T09:35:08Z
|
---
license: apache-2.0
language:
- en
base_model:
- microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
pipeline_tag: fill-mask
datasets:
- SantiagoSanchezF/mgnify_study_descriptions
---
# Model Card for BiomedBERT_mgnify_studies
We fine-tuned BiomedBERT using study descriptions from metagenomic projects sourced from MGnify. We applied masked language modeling (MLM) to this unlabelled text, specifically the project study descriptions. Fine-tuning on domain-specific text helps the model capture the language and nuances of metagenomic study descriptions, which improves performance on downstream biome classification tasks.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** SantiagoSanchezF
- **Model type:** MLM
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
### Downstream Use [optional]
This model is the base of SantiagoSanchezF/trapiche-biome-classifier.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
The model was domain adapted by applying masked language modeling (MLM) to a corpus of study descriptions derived from metagenomic projects in MGnify. The input text was tokenized with a maximum sequence length of 256 tokens. A data collator was configured to randomly mask 15% of the input tokens for the MLM task. Training was performed with a batch size of 8, over 3 epochs, and with a learning rate of 5e-5.
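For reference, a minimal sketch of this MLM domain-adaptation setup with 🤗 Transformers is shown below. The text column name (`description`) and the output directory are assumptions for illustration, not details taken from the original training code.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# MGnify study descriptions; the column name "description" is an assumption.
dataset = load_dataset("SantiagoSanchezF/mgnify_study_descriptions", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["description"], truncation=True, max_length=256),
    batched=True,
    remove_columns=dataset.column_names,
)

# Randomly mask 15% of input tokens for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="biomedbert-mgnify-mlm",  # assumed output path
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=collator,
).train()
```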
## Citation [optional]
TBD
|
KarimKhalil/whisper-large-v3-arabic
|
KarimKhalil
| 2025-04-03T11:19:06Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:18:51Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
justmalhar/fluent-ui-dev-8b
|
justmalhar
| 2025-04-03T11:18:23Z
| 0
| 0
| null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-04-03T11:17:38Z
|
---
license: mit
tags:
- unsloth
---
|
Dhia-GB/sai-tokenizer
|
Dhia-GB
| 2025-04-03T11:16:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:16:31Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgambettaphd/M_llm3_gen5_run0_W_doc1000_synt64_SYNLAST
|
dgambettaphd
| 2025-04-03T11:16:34Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:16:17Z
|
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso18/fe23b6e4-5daa-4fd6-8e16-acd4016fbd64
|
lesso18
| 2025-04-03T11:14:25Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b",
"base_model:adapter:unsloth/llama-3-8b",
"license:llama3",
"region:us"
] | null | 2025-04-03T09:25:32Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe23b6e4-5daa-4fd6-8e16-acd4016fbd64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 38fb448798fed8c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/38fb448798fed8c0_train_data.json
type:
field_instruction: question
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso18/fe23b6e4-5daa-4fd6-8e16-acd4016fbd64
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000218
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/38fb448798fed8c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 180
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 567e9cc6-fbe5-4f94-8ad4-e320c190cb47
wandb_project: 18a
wandb_run: your_name
wandb_runid: 567e9cc6-fbe5-4f94-8ad4-e320c190cb47
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fe23b6e4-5daa-4fd6-8e16-acd4016fbd64
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 1.1453 |
| 0.8416 | 0.3894 | 500 | 0.8411 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
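To use the trained adapter, it can typically be attached to the base model with PEFT (a minimal sketch, assuming the repository ships standard PEFT adapter weights):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained against, then attach the LoRA weights.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "lesso18/fe23b6e4-5daa-4fd6-8e16-acd4016fbd64")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b")
```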
|
openthaigpt/openthaigpt-r1-32b-instruct
|
openthaigpt
| 2025-04-03T11:14:00Z
| 206
| 1
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"openthaigpt",
"qwen",
"reasoning",
"conversational",
"th",
"en",
"arxiv:2504.01789",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-25T07:24:04Z
|
---
license: other
license_name: qwen
language:
- th
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- openthaigpt
- qwen
- reasoning
model-index:
- name: openthaigpt-r1-32b-instruct
results:
- task:
type: reasoning
dataset:
name: SkyThought
type: mathematical_reasoning
metrics:
- name: AIME24-TH
type: accuracy
value: 56.67
- name: AIME24
type: accuracy
value: 63.36
source:
name: 🇹🇭 OpenThaiGPT R1 Benchmark
url: https://openthaigpt.aieat.or.th/
---
# 🇹🇭 OpenThaiGPT R1 32b

[More Info](https://openthaigpt.aieat.or.th/)
🇹🇭 **OpenThaiGPT R1 32b** is an advanced 32-billion-parameter Thai language reasoning model that outperforms larger models like DeepSeek R1 70b and Typhoon R1 70b, while being less than half their size. This model excels at complex reasoning tasks, including mathematics, logic, and code reasoning in Thai language.
## Highlights
- **State-of-the-art Thai reasoning model**, outperforming larger models on mathematical and logical reasoning tasks
- **Explicit reasoning capabilities** with the ability to show step-by-step thought processes
- **Significantly smaller size** (32b) while outperforming 70b models
- **Specialized for Thai language reasoning** including complex mathematics and logic problems
- **High performance on code reasoning** in both Thai and English
## Benchmark Results
| **SkyThought** | **OpenThaiGPT R1 32b** | **DeepSeek R1 70b** | **Typhoon R1 Distill 70b** |
|----------------------|-----------------------------------------------------------------------|--------------------------|----------------------------|
| **AIME24-TH** | <b>56.67</b> | 33.33 | 53.33 |
| **AIME24** | <b>63.36</b> | 53.33 | 53.33 |
| **MATH500-TH** | <b>83.8</b> | 75.4 | 81 |
| **MATH500** | 89.4 | 88.88 | <b>90.2</b> |
| **LiveCodeBench-TH** | <b>62.16</b> | 53.15 | 47.75 |
| **LiveCodeBench** | <b>69.67</b> | 64.97 | 54.79 |
| **OpenThaiEval** | 76.05 | 74.17 | <b>77.59</b> |
| **AVERAGE** | <b style="color:blue">71.58</b> | 63.31 | 65.42 |
## Recommended System Prompt
```
<No system prompt>
```
## Model Technical Report
https://arxiv.org/abs/2504.01789
If OpenThaiGPT has been beneficial for your work, kindly consider citing it as follows:
```tex
@misc{yuenyong2025openthaigpt16r1thaicentric,
title={OpenThaiGPT 1.6 and R1: Thai-Centric Open Source and Reasoning Large Language Models},
author={Sumeth Yuenyong and Thodsaporn Chay-intr and Kobkrit Viriyayudhakorn},
year={2025},
eprint={2504.01789},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.01789},
}
```
## How to use
### Online Web Interface
https://chindax.iapp.co.th
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "openthaigpt/openthaigpt-r1-32b-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "จงหาพื้นที่ของวงกลมที่มีรัศมี 7 หน่วย"  # "Find the area of a circle with radius 7 units"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384,
temperature=0.6
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### vLLM
1. Install vLLM (https://github.com/vllm-project/vllm)
2. Run the server
```bash
vllm serve openthaigpt/openthaigpt-r1-32b-instruct --tensor-parallel-size 2
```
* Note: set `--tensor-parallel-size` to the number of available GPUs.
3. Run inference (CURL example)
3. Run inference (CURL example)
```bash
curl -X POST 'http://127.0.0.1:8000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-d '{
"model": "openthaigpt/openthaigpt-r1-32b-instruct",
"messages": [
{
"role": "user",
"content": "จงหาพื้นที่ของวงกลมที่มีรัศมี 7 หน่วย"
}
],
"max_tokens": 16384,
"temperature": 0.6,
"top_p": 0.95,
"top_k": 40
}'
```
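Since vLLM exposes an OpenAI-compatible API, the same request can also be sent from Python with the `openai` client (a sketch, assuming the server from step 2 is running on `127.0.0.1:8000`):
```python
from openai import OpenAI

# Point the client at the local vLLM server started above; the API key is unused.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openthaigpt/openthaigpt-r1-32b-instruct",
    messages=[{"role": "user", "content": "จงหาพื้นที่ของวงกลมที่มีรัศมี 7 หน่วย"}],
    max_tokens=16384,
    temperature=0.6,
)
print(response.choices[0].message.content)
```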
### GPU Memory Requirements
| **Number of Parameters** | **FP 16 bits** | **8 bits (Quantized)** | **4 bits (Quantized)** |
|------------------|----------------|------------------------|------------------------|
| **32b** | 64 GB | 32 GB | 16 GB |
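For the quantized footprints above, one common option is on-the-fly 4-bit loading with bitsandbytes (a sketch, not an official quantized release; actual memory use also depends on context length and batch size):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization to approach the ~16 GB footprint listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "openthaigpt/openthaigpt-r1-32b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("openthaigpt/openthaigpt-r1-32b-instruct")
```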
## Chat Template
```python
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
```
## Licenses
* This model is available for **Research** and **Commercial uses** under the specified terms. Please see the LICENSE file for more information.
## Supports
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
### OpenThaiGPT Team
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fcd9c426d942eaf4d1ebd30/e8gT15eRfNbyEZhu-UzMX.png" width="200px">
* Kobkrit Viriyayudhakorn ([email protected] / [email protected])
* Sumeth Yuenyong ([email protected])
* Thodsaporn Chay-intr ([email protected])
## Sponsors
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fcd9c426d942eaf4d1ebd30/zSEA_n0cIOZk5pV_t2qii.png" width="400px">
* Supported with 8x Nvidia H100 GPUs by Siam AI Corporation Co., Ltd.: https://siam.ai/
* Research funding provided by the Thailand Science Research and Innovation Fund through the Program Management Unit for Competitiveness (PMU-C), together with iApp Technology Co., Ltd., with the Artificial Intelligence Entrepreneur Association of Thailand operating the project.
<i>Disclaimer: Provided responses are not guaranteed.</i>
|
xw17/Phi-3-mini-4k-instruct_finetuned_2_def_lora3
|
xw17
| 2025-04-03T11:13:53Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:13:47Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmeddoma/lijkoikl
|
ahmeddoma
| 2025-04-03T11:12:31Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:ByteDance/InfiniteYou",
"base_model:adapter:ByteDance/InfiniteYou",
"license:pddl",
"region:us"
] |
text-to-image
| 2025-04-03T11:12:28Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Untitled.jpg
base_model: ByteDance/InfiniteYou
instance_prompt: null
license: pddl
---
# doma
<Gallery />
## Download model
[Download](/ahmeddoma/lijkoikl/tree/main) the model weights from the Files & versions tab.
|
jesusgs01/results_qwen_fold_5
|
jesusgs01
| 2025-04-03T11:12:12Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T22:57:34Z
|
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: results_qwen_fold_5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for results_qwen_fold_5
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jesusgs01/results_qwen_fold_5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
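As a rough illustration of that procedure (the dataset and hyperparameters below are placeholders, not the actual fold-5 recipe), TRL's `SFTTrainer` is typically driven like this:
```python
from datasets import load_dataset
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from trl import SFTConfig, SFTTrainer

# Placeholder data and settings, shown only to sketch the general TRL SFT flow.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
dataset = load_dataset("trl-lib/Capybara", split="train")  # text-only example data

trainer = SFTTrainer(
    model=model,
    processing_class=processor.tokenizer,
    train_dataset=dataset,
    args=SFTConfig(output_dir="results_qwen_fold_5", per_device_train_batch_size=1),
)
trainer.train()
```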
### Framework versions
- TRL: 0.16.0
- Transformers: 4.48.3
- Pytorch: 2.1.2
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Mael7307/Llama-3.2-3B-Instruct_CoT-30steps
|
Mael7307
| 2025-04-03T11:10:56Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T11:09:17Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Mael7307
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kostiantynk-outlook/5e77648f-3b5c-4cd2-8474-e638ee5c73c2
|
kostiantynk-outlook
| 2025-04-03T11:06:41Z
| 0
| 0
|
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"region:us"
] | null | 2025-04-03T11:06:13Z
|
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/SmolLM-1.7B-Instruct
model-index:
- name: kostiantynk-outlook/5e77648f-3b5c-4cd2-8474-e638ee5c73c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk-outlook/5e77648f-3b5c-4cd2-8474-e638ee5c73c2
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
aisys2803/DeepSeek-R1-1-5B-SYS-lora-new
|
aisys2803
| 2025-04-03T11:04:13Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T10:27:16Z
|
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pavi1ee/distilbert-base-uncased-lora-IMDB-text-classification-new
|
pavi1ee
| 2025-04-03T11:01:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T11:01:32Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|