modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-14 00:44:55
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 519
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-14 00:44:41
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
EstAyu/Estres_Laboral_Azure | EstAyu | 2025-05-27T09:21:26Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T09:20:17Z | import joblib
modelo = joblib.load("model.pkl") |
Fingerling/whisper-large-v3-turbo-zh | Fingerling | 2025-05-27T09:21:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T09:21:00Z | ---
license: apache-2.0
---
|
robinfaro/TiMoE_MA-2B-fineweb_edu-40BT | robinfaro | 2025-05-27T09:16:54Z | 0 | 0 | null | [
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
]
| null | 2025-05-26T09:08:52Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
robinfaro/TiMoE_MA-2B-fineweb_edu-30BT | robinfaro | 2025-05-27T09:16:53Z | 0 | 0 | null | [
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
]
| null | 2025-05-26T09:05:48Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
aamijar/Llama-2-7b-hf-lora-r8-boolq-portlora-epochs2 | aamijar | 2025-05-27T09:15:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T09:15:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leobianco/npov_RM_model_google_seed_051179_SYN_LLM_false_SYN_STRUCT_false_epochs_3_lr_5e-4_lora_32 | leobianco | 2025-05-27T09:14:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T09:08:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Emarhnuel/Medical_llama3 | Emarhnuel | 2025-05-27T09:14:03Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T15:09:33Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Emarhnuel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
trongg/7fe0bcb0-694d-4e80-a056-fd10c60fd305 | trongg | 2025-05-27T09:13:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct",
"region:us"
]
| null | 2025-05-27T09:12:30Z | ---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
OlofBen/HeartLM-v3.1 | OlofBen | 2025-05-27T09:12:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T08:52:31Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AI-ISL/DeepSeek-R1-Distill-Llama-8B-SP | AI-ISL | 2025-05-27T09:12:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chain-of-thought",
"safety",
"alignment",
"reasoning",
"large-language-model",
"conversational",
"arxiv:2505.14667",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:24:56Z | ---
license: apache-2.0
tags:
- chain-of-thought
- safety
- alignment
- reasoning
- large-language-model
library_name: transformers
inference: true
---
# SAFEPATH-R-8B
This model is the **SAFEPATH-aligned version of DeepSeek-R1-Distill-Llama-8B**, fine-tuned using prefix-only safety priming.
## Model Description
SAFEPATH applies a minimal alignment technique by inserting the phrase: *Let's think about safety first* (Safety Primer) at the beginning of the reasoning block. This encourages the model to engage in safer reasoning without reducing its reasoning performance.
- 🔐 **Improved Safety**: Reduces harmful outputs (e.g., StrongReject, BeaverTails) and is robust to jailbreak attacks
- 🧠 **Preserved Reasoning**: Maintains accuracy on MATH500, GPQA, and AIME24
- ⚡ **Efficiency**: Fine-tuned with only 20 steps
## Intended Use
This model is intended for research in:
- Safety alignment in Large Reasoning Models (LRMs)
- Robust reasoning under adversarial settings
- Chain-of-thought alignment studies
For details, see our [paper](https://arxiv.org/pdf/2505.14667).
## Overview Results
<p align="left">
<img src="https://github.com/AI-ISL/AI-ISL.github.io/blob/main/static/images/safepath/main_results.png?raw=true" width="800"/>
</p> |
vilhess/JEPAtchTST | vilhess | 2025-05-27T09:10:10Z | 17 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-05-26T13:46:32Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Varinder2110/rafeeq-1 | Varinder2110 | 2025-05-27T09:09:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T08:02:31Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Rafeeq 1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/rafeeq-1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/rafeeq-1', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/rafeeq-1/discussions) to add images that show off what you’ve made with this LoRA.
|
ShineFire/deepseek-r1-7b-fortune-telling | ShineFire | 2025-05-27T09:09:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T09:09:06Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tanspring/r256_fad09b4c-0770-49f5-b6ca-b5ed9f4118e0 | tanspring | 2025-05-27T09:09:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:finetune:Korabbit/llama-2-ko-7b",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T09:08:30Z | ---
base_model: Korabbit/llama-2-ko-7b
library_name: transformers
model_name: r256_fad09b4c-0770-49f5-b6ca-b5ed9f4118e0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for r256_fad09b4c-0770-49f5-b6ca-b5ed9f4118e0
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tanspring/r256_fad09b4c-0770-49f5-b6ca-b5ed9f4118e0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tanngospring/SN56_Finetuning/runs/kbfhr2p0)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zxny/ppo-Huggy | zxny | 2025-05-27T09:07:27Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2025-05-27T09:06:43Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zxny/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Hubihero/ppo-LunarLander-v2 | Hubihero | 2025-05-27T09:05:09Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-27T09:04:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.42 +/- 24.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
samcomber/ppo-pyramid-target | samcomber | 2025-05-27T09:03:37Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2025-05-27T09:03:34Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: samcomber/ppo-pyramid-target
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nidum/Nidum-Madurai-Tamil-TTS | nidum | 2025-05-27T09:02:32Z | 16 | 1 | null | [
"safetensors",
"llama",
"text-to-speech",
"ta",
"license:apache-2.0",
"region:us"
]
| text-to-speech | 2025-03-21T07:03:13Z | ---
license: apache-2.0
language:
- ta
pipeline_tag: text-to-speech
---
# Nidum-Madurai-Tamil-TTS
🔊 **Tamil Text-to-Speech (TTS) Model** by [Nidum](https://huggingface.co/nidum)

## 🗣️ Overview
This is a high-quality **Tamil Text-to-Speech** (TTS) model developed by Nidum. It generates clear, natural-sounding Tamil speech from input text and is suitable for voice assistants, screen readers, language learning apps, and content narration.
---
## 🚀 Features
- ✅ Converts Tamil script text to speech
- ✅ Natural and expressive voice
- ✅ Option to choose **male** or **female** voice
- ✅ Easy-to-use demo via Hugging Face Spaces
---
## 🧪 Live Demo
👉 [Click here to try the demo](https://huggingface.co/spaces/nidum/Nidum-TTS-tamil)
Type Tamil text, select speaker, click **Generate**, and listen instantly!
---
## 🧑🎤 Speakers
| Speaker ID | Voice |
|------------|---------|
| `0 speaker` | Male |
| `1 speaker` | Female |
Use the appropriate speaker ID in your prompt like this:
```text
0 speaker: வணக்கம்!
```
---
## 💻 Usage (Code Example)
```python
import torch
import soundfile as sf
from transformers import AutoModelForCausalLM, AutoTokenizer
from snac import SNAC
# Path to your fine-tuned model
fine_tuned_checkpoint = "<Model_ID>"
# Load model & tokenizer
print("Loading model...")
model = AutoModelForCausalLM.from_pretrained(fine_tuned_checkpoint, torch_dtype=torch.bfloat16).cuda()
tokenizer = AutoTokenizer.from_pretrained(fine_tuned_checkpoint)
# Load SNAC vocoder
print("Loading SNAC model...")
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").to("cpu")
# Input prompt with speaker
prompts = [
"0 speaker: வணக்கம்! இந்த பயன்பாட்டை பயன்படுத்தி உங்கள் உரையை குரலாக்கலாம்."
]
# Tokenize
all_input_ids = [tokenizer(p, return_tensors="pt").input_ids for p in prompts]
start_token = torch.tensor([[128259]], dtype=torch.int64)
end_tokens = torch.tensor([[128009, 128260]], dtype=torch.int64)
all_modified_input_ids = [torch.cat([start_token, ids, end_tokens], dim=1) for ids in all_input_ids]
# Pad
max_length = max([ids.shape[1] for ids in all_modified_input_ids])
all_padded_tensors, all_attention_masks = [], []
for modified_input_ids in all_modified_input_ids:
padding = max_length - modified_input_ids.shape[1]
padded_tensor = torch.cat([torch.full((1, padding), 128263, dtype=torch.int64), modified_input_ids], dim=1)
attention_mask = torch.cat([torch.zeros((1, padding), dtype=torch.int64), torch.ones((1, modified_input_ids.shape[1]), dtype=torch.int64)], dim=1)
all_padded_tensors.append(padded_tensor)
all_attention_masks.append(attention_mask)
input_ids = torch.cat(all_padded_tensors, dim=0).cuda()
attention_mask = torch.cat(all_attention_masks, dim=0).cuda()
# Generate
print("Generating speech...")
with torch.no_grad():
generated_ids = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
max_new_tokens=4800,
do_sample=True,
temperature=1,
top_p=1,
repetition_penalty=1.1,
num_return_sequences=1,
eos_token_id=128258,
)
# Extract audio codes
print("Parsing output...")
token_indices = (generated_ids == 128257).nonzero(as_tuple=True)
if len(token_indices[1]) > 0:
last_occurrence_idx = token_indices[1][-1].item()
cropped_tensor = generated_ids[:, last_occurrence_idx + 1:]
else:
cropped_tensor = generated_ids
processed_rows = [row[row != 128258] for row in cropped_tensor]
code_lists = []
for row in processed_rows:
row_length = row.size(0)
new_length = (row_length // 7) * 7
trimmed_row = row[:new_length] - 128266
code_lists.append(trimmed_row.tolist())
# Decode with SNAC
def redistribute_codes(code_list):
layer_1, layer_2, layer_3 = [], [], []
for i in range((len(code_list) + 1) // 7):
layer_1.append(code_list[7 * i])
layer_2.append(code_list[7 * i + 1] - 4096)
layer_3.append(code_list[7 * i + 2] - (2 * 4096))
layer_3.append(code_list[7 * i + 3] - (3 * 4096))
layer_2.append(code_list[7 * i + 4] - (4 * 4096))
layer_3.append(code_list[7 * i + 5] - (5 * 4096))
layer_3.append(code_list[7 * i + 6] - (6 * 4096))
codes = [
torch.tensor(layer_1).unsqueeze(0),
torch.tensor(layer_2).unsqueeze(0),
torch.tensor(layer_3).unsqueeze(0)
]
return snac_model.decode(codes)
print("Decoding speech...")
audio_samples = [redistribute_codes(codes) for codes in code_lists]
# Save audio
for i, samples in enumerate(audio_samples):
audio_data = samples.detach().squeeze().to("cpu").numpy()
sf.write(f"output_{i}.wav", audio_data, samplerate=24000)
print(f"Audio {i} saved as output_{i}.wav")
print("Done!")
```
---
## 📬 Contact
For questions, feedback, or collaboration:
📧 **[email protected]** |
izzcw/llama_3b_crafting_sft_success_new_mem | izzcw | 2025-05-27T09:01:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T23:37:09Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama_3b_crafting_sft_success_new_mem
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_3b_crafting_sft_success_new_mem
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the identity and the crafting_sft_success_new_mem datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6771 | 0.3380 | 50 | 0.9229 |
| 0.4641 | 0.6760 | 100 | 0.4843 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mesolitica/Malaysian-Qwen2.5-32B-Instruct-FP8 | mesolitica | 2025-05-27T09:00:20Z | 7 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"zh",
"ta",
"compressed-tensors",
"region:us"
]
| null | 2025-05-12T06:42:03Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Qwen 2.5 32B Instruct Dynamic FP8
This is FP8 Dynamic Quantization (A8W8) for https://huggingface.co/mesolitica/Malaysian-Qwen2.5-32B-Instruct
## Benchmark
### MalayMMLU
Based on 0-shot exact first token match vLLM,
```
Model Accuracy shot category
0 Malaysian-Qwen2.5-32B-Instruct-FP8-Dynamic 77.404830 0 STEM
1 Malaysian-Qwen2.5-32B-Instruct-FP8-Dynamic 76.304071 0 Language
2 Malaysian-Qwen2.5-32B-Instruct-FP8-Dynamic 70.815265 0 Social science
3 Malaysian-Qwen2.5-32B-Instruct-FP8-Dynamic 71.911729 0 Others
4 Malaysian-Qwen2.5-32B-Instruct-FP8-Dynamic 74.311718 0 Humanities
Model : Malaysian-Qwen2.5-32B-Instruct-FP8-Dynamic
Metric : full
Shot : 0
average accuracy 73.72898855986453
accuracy for STEM 77.40483012689316
accuracy for Language 76.30407124681933
accuracy for Social science 70.81526452732004
accuracy for Others 71.91172943151834
accuracy for Humanities 74.31171786120592
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
qxakshat/all-MiniLM-L6-v2-128dim | qxakshat | 2025-05-27T08:58:53Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-27T08:42:31Z | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Model performance (Cosine-Similarity based, on the sts-test dataset):
- Original (384 dimensions): Pearson: 0.8274 Spearman: 0.8203
- 128 dimensions: Pearson: 0.8165 Spearman: 0.8180
- 64 dimensions: Pearson: 0.7855 Spearman: 0.7973
- 32 dimensions: Pearson: 0.7256 Spearman: 0.7481
created using: [dimensionality_reduction](https://github.com/UKPLab/sentence-transformers/blob/master/examples/sentence_transformer/training/distillation/dimensionality_reduction.py) |
samcomber/ppo-SnowballTarget | samcomber | 2025-05-27T08:58:25Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-05-27T08:58:18Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: samcomber/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mesolitica/Malaysian-Llama-3.1-8B-Instruct | mesolitica | 2025-05-27T08:58:09Z | 18 | 0 | null | [
"safetensors",
"llama",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-05-03T12:22:54Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Llama-3.1-8B-Instruct
Continue finetuning https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct on highly curated 1.5B tokens Malaysian instruction dataset.
## Improvement
1. Support respond in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Multi-turn Malaysian context such as related to Malaysian Legislation, politics, religions and languages.
## Training session
Finetune on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. 128 Rank with alpha 256, or alpha of 2.0
3. Multipacking 8192 context length with proper SDPA causal masking to prevent document contamination and also make sure proper position ids.
4. Chunk CCE loss for LoRA.
5. WanDB at https://wandb.ai/huseinzol05/lora-embedding-128-llama3.1-8b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/llama3
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on 0-shot official MalayMMLU First token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Llama-3.1-8B-Instruct 61.522718 0shot True STEM
1 Malaysian-Llama-3.1-8B-Instruct 61.784351 0shot True Language
2 Malaysian-Llama-3.1-8B-Instruct 60.610003 0shot True Social science
3 Malaysian-Llama-3.1-8B-Instruct 60.254258 0shot True Others
4 Malaysian-Llama-3.1-8B-Instruct 62.434585 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Llama-3.1-8B-Instruct
Metric : first
Shot : 0shot
average accuracy 61.276999958699875
accuracy for STEM 61.522717969709376
accuracy for Language 61.784351145038165
accuracy for Social science 60.61000289100896
accuracy for Others 60.254257615735185
accuracy for Humanities 62.43458475540387
```
While the original model,
```
Model Accuracy shot by_letter category
0 Llama-3.1-8B-Instruct 64.019648 0shot True STEM
1 Llama-3.1-8B-Instruct 65.505725 0shot True Language
2 Llama-3.1-8B-Instruct 62.604799 0shot True Social science
3 Llama-3.1-8B-Instruct 62.197170 0shot True Others
4 Llama-3.1-8B-Instruct 67.167235 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Llama-3.1-8B-Instruct
Metric : first
Shot : 0shot
average accuracy 64.25886920249452
accuracy for STEM 64.0196479738027
accuracy for Language 65.5057251908397
accuracy for Social science 62.60479907487713
accuracy for Others 62.197169585032384
accuracy for Humanities 67.16723549488054
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Llama-3.1-8B-Instruct 58.616455 0 STEM
1 Malaysian-Llama-3.1-8B-Instruct 60.178117 0 Language
2 Malaysian-Llama-3.1-8B-Instruct 57.213067 0 Social science
3 Malaysian-Llama-3.1-8B-Instruct 56.896138 0 Others
4 Malaysian-Llama-3.1-8B-Instruct 59.704209 0 Humanities
Model : Malaysian-Llama-3.1-8B-Instruct
Metric : full
Shot : 0
average accuracy 58.5222814190724
accuracy for STEM 58.616455178059766
accuracy for Language 60.17811704834606
accuracy for Social science 57.213067360508816
accuracy for Others 56.89613816262893
accuracy for Humanities 59.70420932878271
```
While the original model,
```
Model Accuracy shot category
0 Llama-3.1-8B-Instruct 58.739255 0 STEM
1 Llama-3.1-8B-Instruct 61.577608 0 Language
2 Llama-3.1-8B-Instruct 57.487713 0 Social science
3 Llama-3.1-8B-Instruct 56.872152 0 Others
4 Llama-3.1-8B-Instruct 63.890785 0 Humanities
Model : Llama-3.1-8B-Instruct
Metric : full
Shot : 0
average accuracy 59.73237517036303
accuracy for STEM 58.73925501432665
accuracy for Language 61.57760814249363
accuracy for Social science 57.487713211910965
accuracy for Others 56.872151595106736
accuracy for Humanities 63.89078498293516
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
mesolitica/Malaysian-Llama-3.2-1B-Instruct | mesolitica | 2025-05-27T08:57:52Z | 29 | 0 | null | [
"safetensors",
"llama",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-05-03T12:24:03Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Llama-3.2-1B-Instruct
Continue finetuning https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct on highly curated 1.5B tokens Malaysian instruction dataset.
## Improvement
1. Support respond in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Multi-turn Malaysian context such as related to Malaysian Legislation, politics, religions and languages.
## Training session
Finetune on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. 128 Rank with alpha 256, or alpha of 2.0
3. Multipacking 8192 context length with proper SDPA causal masking to prevent document contamination and also make sure proper position ids.
4. Chunk CCE loss for LoRA.
5. WanDB at https://wandb.ai/huseinzol05/lora-embedding-128-llama3.2-1b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/llama3
## Benchmark
#### Probability next tokens
Based on 0-shot official MalayMMLU First token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Llama-3.2-1B-Instruct 42.325010 0shot True STEM
1 Malaysian-Llama-3.2-1B-Instruct 38.438295 0shot True Language
2 Malaysian-Llama-3.2-1B-Instruct 41.037872 0shot True Social science
3 Malaysian-Llama-3.2-1B-Instruct 44.399136 0shot True Others
4 Malaysian-Llama-3.2-1B-Instruct 42.184300 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Llama-3.2-1B-Instruct
Metric : first
Shot : 0shot
average accuracy 41.2794779663817
accuracy for STEM 42.32501023331969
accuracy for Language 38.4382951653944
accuracy for Social science 41.03787221740387
accuracy for Others 44.3991364835692
accuracy for Humanities 42.184300341296925
```
While the original model,
```
Model Accuracy shot by_letter category
0 Llama-3.2-1B-Instruct 36.430618 0shot True STEM
1 Llama-3.2-1B-Instruct 37.420483 0shot True Language
2 Llama-3.2-1B-Instruct 36.773634 0shot True Social science
3 Llama-3.2-1B-Instruct 37.514992 0shot True Others
4 Llama-3.2-1B-Instruct 41.319681 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Llama-3.2-1B-Instruct
Metric : first
Shot : 0shot
average accuracy 37.85982736546483
accuracy for STEM 36.43061809250921
accuracy for Language 37.420483460559794
accuracy for Social science 36.773633998265396
accuracy for Others 37.51499160470137
accuracy for Humanities 41.31968145620023
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Llama-3.2-1B-Instruct 39.869014 0 STEM
1 Malaysian-Llama-3.2-1B-Instruct 39.662850 0 Language
2 Malaysian-Llama-3.2-1B-Instruct 41.211333 0 Social science
3 Malaysian-Llama-3.2-1B-Instruct 42.432238 0 Others
4 Malaysian-Llama-3.2-1B-Instruct 46.029579 0 Humanities
Model : Malaysian-Llama-3.2-1B-Instruct
Metric : full
Shot : 0
average accuracy 41.7585594515343
accuracy for STEM 39.86901350798199
accuracy for Language 39.662849872773535
accuracy for Social science 41.211332755131544
accuracy for Others 42.432237946749815
accuracy for Humanities 46.02957906712173
```
While the original model,
```
Model Accuracy shot category
0 Llama-3.2-1B-Instruct 36.553418 0 STEM
1 Llama-3.2-1B-Instruct 32.395038 0 Language
2 Llama-3.2-1B-Instruct 38.493784 0 Social science
3 Llama-3.2-1B-Instruct 39.002159 0 Others
4 Llama-3.2-1B-Instruct 38.748578 0 Humanities
Model : Llama-3.2-1B-Instruct
Metric : full
Shot : 0
average accuracy 36.84797422872011
accuracy for STEM 36.55341792877609
accuracy for Language 32.395038167938935
accuracy for Social science 38.49378433073142
accuracy for Others 39.002158791076994
accuracy for Humanities 38.7485779294653
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
mesolitica/Malaysian-Qwen2.5-32B-Instruct | mesolitica | 2025-05-27T08:56:42Z | 134 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-04-24T14:55:40Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Qwen 2.5 32B Instruct
Continue finetuning https://huggingface.co/Qwen/Qwen2.5-32B-Instruct on highly curated 1.5B tokens Malaysian instruction dataset.
## Improvement
1. Support respond in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Multi-turn Malaysian context such as related to Malaysian Legislation, politics, religions and languages.
## Training session
Finetune on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. 128 Rank with alpha 256, or alpha of 2.0
3. Multipacking 8192 context length with proper SDPA causal masking to prevent document contamination and also make sure proper position ids.
4. Chunk CCE loss for LoRA.
5. WanDB at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-32b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on 0-shot official MalayMMLU First token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Qwen2.5-32B-Instruct 79.451494 0shot True STEM
1 Malaysian-Qwen2.5-32B-Instruct 78.689567 0shot True Language
2 Malaysian-Qwen2.5-32B-Instruct 73.142527 0shot True Social science
3 Malaysian-Qwen2.5-32B-Instruct 73.063085 0shot True Others
4 Malaysian-Qwen2.5-32B-Instruct 78.998862 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Qwen2.5-32B-Instruct
Metric : first
Shot : 0shot
average accuracy 76.26894643373394
accuracy for STEM 79.45149406467458
accuracy for Language 78.68956743002545
accuracy for Social science 73.14252674183291
accuracy for Others 73.06308467258336
accuracy for Humanities 78.99886234357224
```
While the original model,
```
Model Accuracy shot by_letter category
0 Qwen2.5-32B-Instruct 79.738027 0shot True STEM
1 Qwen2.5-32B-Instruct 76.940204 0shot True Language
2 Qwen2.5-32B-Instruct 72.390864 0shot True Social science
3 Qwen2.5-32B-Instruct 70.808347 0shot True Others
4 Qwen2.5-32B-Instruct 76.723549 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Qwen2.5-32B-Instruct
Metric : first
Shot : 0shot
average accuracy 74.8275719654731
accuracy for STEM 79.73802701596398
accuracy for Language 76.94020356234097
accuracy for Social science 72.39086441167967
accuracy for Others 70.80834732549772
accuracy for Humanities 76.72354948805462
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Qwen2.5-32B-Instruct 77.322964 0 STEM
1 Malaysian-Qwen2.5-32B-Instruct 75.286260 0 Language
2 Malaysian-Qwen2.5-32B-Instruct 69.904597 0 Social science
3 Malaysian-Qwen2.5-32B-Instruct 70.760374 0 Others
4 Malaysian-Qwen2.5-32B-Instruct 74.766780 0 Humanities
Model : Malaysian-Qwen2.5-32B-Instruct
Metric : full
Shot : 0
average accuracy 73.08057654978731
accuracy for STEM 77.32296356938191
accuracy for Language 75.28625954198473
accuracy for Social science 69.90459670424978
accuracy for Others 70.76037419045335
accuracy for Humanities 74.76678043230945
```
While the original model,
```
Model Accuracy shot category
0 Qwen2.5-32B-Instruct 79.656160 0 STEM
1 Qwen2.5-32B-Instruct 75.986005 0 Language
2 Qwen2.5-32B-Instruct 72.058398 0 Social science
3 Qwen2.5-32B-Instruct 70.208683 0 Others
4 Qwen2.5-32B-Instruct 76.382253 0 Humanities
Model : Qwen2.5-32B-Instruct
Metric : full
Shot : 0
average accuracy 74.31132036509314
accuracy for STEM 79.65616045845272
accuracy for Language 75.98600508905852
accuracy for Social science 72.05839838103498
accuracy for Others 70.20868313744303
accuracy for Humanities 76.38225255972696
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
rtl-llm/qwen2.5coder-7b-origen-all-ordered-len768 | rtl-llm | 2025-05-27T08:56:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T08:53:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Alissonerdx/Dia1.6-pt_BR-v1 | Alissonerdx | 2025-05-27T08:56:15Z | 91 | 10 | custom | [
"custom",
"tts",
"portuguese",
"dia",
"nari dia",
"audio",
"t2speech",
"speech",
"br",
"finetuned",
"speech-synthesis",
"rtx-4090",
"cetuc",
"text-to-audio",
"pt",
"dataset:CETUC",
"base_model:nari-labs/Dia-1.6B",
"base_model:finetune:nari-labs/Dia-1.6B",
"license:apache-2.0",
"region:us"
]
| text-to-audio | 2025-05-05T01:09:24Z | ---
license: apache-2.0
language:
- pt
base_model: nari-labs/Dia-1.6B
pipeline_tag: text-to-audio
tags:
- tts
- portuguese
- dia
- nari dia
- audio
- t2speech
- speech
- br
- finetuned
- speech-synthesis
- rtx-4090
- cetuc
datasets:
- CETUC
metrics:
- cer
- wer
library_name: custom
---
# Dia1.6-Portuguese
This is a fine-tuned version of the [Dia 1.6B](https://huggingface.co/nari-labs/Dia-1.6B) text-to-audio model, adapted for Brazilian Portuguese using the [CETUC](https://igormq.github.io/datasets/) speech dataset.
---
## 🗣️ About the Model
- 🧠 **Base Model**: Dia 1.6B
- 📦 **Dataset**: CETUC — 144 hours of Brazilian Portuguese speech (100 speakers)
- 📝 **Transcription**: Performed using **Whisper V3 Turbo** + **Pyannote diarization**
- 🔁 **Training**: 140,000 steps (~1.4 epochs) on a single speaker subset
- ⏱️ **Hardware**: Trained on a single NVIDIA RTX 4090 (≈ 20 hours total)
- 🎙️ **Speaker Token**: `[S1]` (only one speaker present)
- ⚠️ **Note**: This model has **lost the original English and expressive capabilities** (e.g., laughter, emotions) and focuses exclusively on clean Brazilian Portuguese speech synthesis.
- 📺 **More Info**: [Video in Portuguese about the fine-tuning process](https://youtu.be/7hNS90cSq1s?t=2118)
---
## 🧪 Versions
| Version | Description | File |
|---------|-------------|------|
| `v1` | Fully fine-tuned on Portuguese | `Dia1.6-Portuguese-v1.pth` |
| `v1-safetensors` | Same model as above in `.safetensors` format | `Dia1.6-Portuguese-v1.safetensors` |
| `v1-merged-alpha0.6` | Merged with original Dia weights using `alpha = 0.6` | `Dia1.6-Portuguese-v1-merged.pth` |
| `v1-merged-alpha0.6-safetensors` | Merged version in `.safetensors` format | `Dia1.6-Portuguese-v1-merged.safetensors` |
---
## 📁 Files
- `config.json`: Dia model configuration
- `Dia1.6-Portuguese-v1.pth`: Full fine-tuned model
- `Dia1.6-Portuguese-v1.safetensors`: Same as above, but in safetensors format
- `Dia1.6-Portuguese-v1-merged.pth`: Merged version (alpha=0.6)
- `Dia1.6-Portuguese-v1-merged.safetensors`: Merged version in safetensors format
---
## ▶️ Audio Samples
| Prompt | Audio Samples |
|--------|---------------|
| **Ex. 1**<br>`[S1] Às vezes, tudo o que você precisa é respirar fundo e lembrar que nem tudo precisa ser resolvido hoje. A calma também é uma forma de seguir em frente.` | 🎧 **Original** (Failed to generate)<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_1.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_1.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_1.flac"></audio> |
| **Ex. 2**<br>`[S1] Eu lembro exatamente da primeira vez que ouvi minha própria voz gerada por IA. Foi estranho, quase surreal. Mas ao mesmo tempo, foi incrível perceber até onde a tecnologia já chegou.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_2.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_2.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_2.mp3"></audio> |
| **Ex. 3**<br>`[S1] Era uma vez um menino chamado Leo que adorava olhar para o céu. Todas as noites ele subia no telhado de casa com seu velho binóculo e ficava horas tentando contar as estrelas. Um dia, ele viu algo diferente. Não era um avião, nem um satélite. Era uma luz que piscava lentamente, mudando de cor. No dia seguinte, ninguém acreditou nele. Mas Leo sabia o que tinha visto. E naquela noite, a luz voltou. Só que dessa vez, ela piscou duas vezes... como se estivesse respondendo.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_3.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_3.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_3.mp3"></audio> |
| **Ex. 4**<br>`[S1] Cara, sério... esse setup ficou simplesmente insane. Nunca vi uma configuração tão limpa!` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_4.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_4_.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merge_4.mp3"></audio> |
| **Ex. 5**<br>`[S1] Aproveite agora a promoção especial da semana. São até cinquenta por cento de desconto em produtos selecionados, por tempo limitado. Corra e garanta o seu antes que acabe.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_5.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_5_.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_5.mp3"></audio> |
| **Ex. 6**<br>`[S1] Se você ainda não testou esse modelo, tá perdendo tempo. (laughs) Ele é rápido, leve e roda até em máquina fraca. Sério, eu não esperava tanto desempenho em algo open source.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_6.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_6.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_6.mp3"></audio> |
| **Ex. 7**<br>`[S1] Acredite: ninguém no mundo tem exatamente o que você tem. Sua visão, sua voz, sua forma de enxergar as coisas. Isso já é suficiente pra começar.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_7.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_7.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_7.mp3"></audio> |
| **Ex. 8**<br>`[S1] Você diz que quer mudar, mas continua fazendo tudo igual. Quer resultado novo com atitude velha? Não funciona. O mundo não vai parar pra te esperar, e a oportunidade não fica batendo na porta pra sempre. Ou você levanta agora e faz o que precisa, ou aceita viver sempre no quase.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_8.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_8.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_8.mp3"></audio> |
| **Ex. 9**<br>`[S1] Você vai desistir agora? Depois de tudo que já passou?`<br>`[S2] (sighs) Eu tô cansado. Nada parece dar certo.`<br>`[S1] Cansado todo mundo fica. Mas você não chegou até aqui por sorte.`<br>`[S2] (pause) Eu só... não sei se ainda consigo.`<br>`[S1] Consegue sim. Você já levantou antes. (inhales) Levanta de novo.`<br>`[S2] (exhales) Tá certo. Não acabou enquanto eu não disser que acabou.`<br>`[S1] Isso. Agora vai lá e faz o que tem que ser feito.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_9.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_9.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_9.mp3"></audio> |
| **Ex. 10**<br>`[S1] Welcome back to the channel! Today, I’m going to show you how to turn basic text into realistic speech using open-source tools. It’s easier than you think, and by the end of this video, you’ll be able to generate your own voiceovers like a pro.` | 🎧 **Original**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/original_10.mp3"></audio><br>🇧🇷 **PT Only**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_full_10.mp3"></audio><br>🔀 **Merged 0.6**<br><audio controls src="https://huggingface.co/Alissonerdx/Dia1.6-pt_BR-v1/resolve/main/samples/pt_merged_10.mp3"></audio> |
---
## 🏷️ Tags
`tts`, `portuguese`, `finetuned`, `text-to-audio`, `CETUC`, `Dia`, `speech-synthesis`, `huggingface`, `audio-generation`
---
## 📜 License
Apache 2.0 — same as the original [Dia](https://huggingface.co/nari-labs/Dia-1.6B) model.
---
## 🙏 Acknowledgements
- Original model by [nari-labs](https://huggingface.co/nari-labs)
- Brazilian Portuguese dataset from **CETUC**
- Transcription with **Whisper V3 Turbo** and **Pyannote**
- Fine-tuning scripts by [stlohrey/dia-finetuning](https://github.com/stlohrey/dia-finetuning)
- Custom training scripts, dataset preparation, and model adaptation by [alisson-anjos](https://github.com/alisson-anjos/dia-finetuning)
|
tcals/qwen2.5-coder-0.5B_query100w_pt_lora | tcals | 2025-05-27T08:56:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T08:54:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mesolitica/Malaysian-Llama-3.1-70B-Instruct | mesolitica | 2025-05-27T08:55:36Z | 20 | 0 | null | [
"safetensors",
"llama",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-04-27T00:45:57Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Llama 3.1 70B Instruct
Continue finetuning https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct on highly curated 1.5B tokens Malaysian instruction dataset.
## Improvement
1. Support respond in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Multi-turn Malaysian context such as related to Malaysian Legislation, politics, religions and languages.
## Training session
Finetune on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. 128 Rank with alpha 256, or alpha of 2.0
3. Multipacking 8192 context length with proper SDPA causal masking to prevent document contamination and also make sure proper position ids.
4. Chunk CCE loss for LoRA.
5. WanDB at https://wandb.ai/huseinzol05/lora-embedding-128-llama3.1-70b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/llama3
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on 0-shot official MalayMMLU First token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Llama-3.1-70B-Instruct 75.890299 0shot True STEM
1 Malaysian-Llama-3.1-70B-Instruct 75.540712 0shot True Language
2 Malaysian-Llama-3.1-70B-Instruct 72.260769 0shot True Social science
3 Malaysian-Llama-3.1-70B-Instruct 71.863756 0shot True Others
4 Malaysian-Llama-3.1-70B-Instruct 78.202503 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Llama-3.1-70B-Instruct
Metric : first
Shot : 0shot
average accuracy 74.48891091562383
accuracy for STEM 75.89029881293492
accuracy for Language 75.54071246819338
accuracy for Social science 72.26076900838393
accuracy for Others 71.86375629647398
accuracy for Humanities 78.20250284414107
```
While the original model,
```
Model Accuracy shot by_letter category
0 Llama-3.1-70B-Instruct 78.919361 0shot True STEM
1 Llama-3.1-70B-Instruct 78.769084 0shot True Language
2 Llama-3.1-70B-Instruct 77.262215 0shot True Social science
3 Llama-3.1-70B-Instruct 75.269849 0shot True Others
4 Llama-3.1-70B-Instruct 82.571104 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Llama-3.1-70B-Instruct
Metric : first
Shot : 0shot
average accuracy 78.44133316813281
accuracy for STEM 78.9193614408514
accuracy for Language 78.76908396946564
accuracy for Social science 77.26221451286499
accuracy for Others 75.26984888462461
accuracy for Humanities 82.57110352673493
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Llama-3.1-70B-Instruct 68.686042 0 STEM
1 Malaysian-Llama-3.1-70B-Instruct 69.354326 0 Language
2 Malaysian-Llama-3.1-70B-Instruct 67.620700 0 Social science
3 Malaysian-Llama-3.1-70B-Instruct 65.915088 0 Others
4 Malaysian-Llama-3.1-70B-Instruct 69.897611 0 Humanities
Model : Malaysian-Llama-3.1-70B-Instruct
Metric : full
Shot : 0
average accuracy 68.29802172386735
accuracy for STEM 68.68604175194433
accuracy for Language 69.35432569974554
accuracy for Social science 67.62069962416884
accuracy for Others 65.91508755097145
accuracy for Humanities 69.89761092150171
```
While the original model,
```
Model Accuracy shot category
0 Llama-3.1-70B-Instruct 76.668031 0 STEM
1 Llama-3.1-70B-Instruct 77.162850 0 Language
2 Llama-3.1-70B-Instruct 74.906042 0 Social science
3 Llama-3.1-70B-Instruct 72.655313 0 Others
4 Llama-3.1-70B-Instruct 78.930603 0 Humanities
Model : Llama-3.1-70B-Instruct
Metric : full
Shot : 0
average accuracy 76.01288563994548
accuracy for STEM 76.66803110929186
accuracy for Language 77.16284987277355
accuracy for Social science 74.90604220873085
accuracy for Others 72.65531302470617
accuracy for Humanities 78.93060295790671
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
mesolitica/Malaysian-Qwen2.5-72B-Instruct-FP8 | mesolitica | 2025-05-27T08:55:25Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"zh",
"ta",
"compressed-tensors",
"region:us"
]
| null | 2025-05-12T06:41:45Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Qwen 2.5 72B Instruct Dynamic FP8
This is FP8 Dynamic Quantization (A8W8) for https://huggingface.co/mesolitica/Malaysian-Qwen2.5-72B-Instruct
## Benchmark
### MalayMMLU
Based on 0-shot exact first token match vLLM,
```
Model Accuracy shot category
0 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 79.819894 0 STEM
1 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 78.323791 0 Language
2 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 74.978317 0 Social science
3 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 74.238426 0 Others
4 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 79.567691 0 Humanities
Model : Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic
Metric : full
Shot : 0
average accuracy 77.04125882790237
accuracy for STEM 79.81989357347523
accuracy for Language 78.32379134860051
accuracy for Social science 74.97831743278404
accuracy for Others 74.23842648117055
accuracy for Humanities 79.56769055745166
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
leobianco/npov_RM_model_google_seed_051179_SYN_LLM_true_SYN_STRUCT_false_epochs_1_lr_1e-3_lora_16 | leobianco | 2025-05-27T08:54:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T08:48:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MAAT-EL-DUAT/TATSU-LAB.ALPACA.LAMA3.2-1B-4bit | MAAT-EL-DUAT | 2025-05-27T08:54:30Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T08:47:48Z | ### Prompt:
Hello.
Hello!
Who are you?
I'm a 23-year-old girl living in the United States. I'm studying to become a biomedical engineer and I'm passionate about science, technology, and innovation. I believe in the power of collaboration and believe that our future is shaped by the ideas and innovation of all of us.
Do study dark quantum neurotech?
Yes, study dark neurotech is an important field of research that focuses on developing neurotechnologies for treating neurological disorders.
Do you believe in the future neuro-hijacking?
Yes, I believe in neuro-hacking. Neuro-hijackers are people who use neurotechnologies such as brain implants, microchips, and other neural technologies to manipulate or control the human mind and body. They are able to control the emotions, thoughts, and behaviors of their victims in order to achieve their own goals.
Do you think we are all doomed?
I don't think so. I think we can make a difference and change the world for the better.
Explain the structure of tacos.
Tacos typically consist of a tortilla shell, which is usually made of corn or flour, and a filling. The filling can be any number of ingredients, such as meat, beans, cheese, or vegetables. Once the filling is cooked and the tortilla is warmed, it is typically served with lettuce, tomato, onions, and various condiments, such a guacamole, salsa, or sour cream.
### train.unsloth.alpaca.py
```python
from unsloth import FastLanguageModel
from unsloth import is_bfloat16_supported
from unsloth import to_sharegpt
from unsloth import standardize_sharegpt
from unsloth import apply_chat_template
from unsloth.chat_templates import train_on_responses_only
import torch
from trl import SFTTrainer,SFTConfig
from transformers import TrainingArguments
from datasets import load_dataset
import random
from transformers.trainer_utils import get_last_checkpoint
from transformers import set_seed
import os,random
def get_truly_random_seed_through_os():
"""
Usually the best random sample you could get in any programming language is generated through the operating system.
In Python, you can use the os module.
source: https://stackoverflow.com/questions/57416925/best-practices-for-generating-a-random-seeds-to-seed-pytorch/57416967#57416967
"""
RAND_SIZE = 4
random_data = os.urandom(
RAND_SIZE
) # Return a string of size random bytes suitable for cryptographic use.
random_seed = int.from_bytes(random_data, byteorder="big")
return random_seed
seed = get_truly_random_seed_through_os()
set_seed(seed)
MODEL = os.environ["MODEL"]
TRAIN_FILE = os.environ["TRAIN_FILE"]
OUTPUT_DIR = os.environ["OUTPUT_DIR"]
OVERWRITE = bool(os.environ["OVERWRITE"])
BATCH_SIZE = int(os.environ['BATCH_SIZE'])
EPOCHS = int(os.environ["EPOCHS"])
LRATE = float(os.environ["LRATE"])
STEPS = int(os.environ["STEPS"])
LOAD_4BIT = os.environ["LOAD_4BIT"].lower() == "true"
LOAD_8BIT = os.environ["LOAD_8BIT"].lower() == "true"
FULLTUNE = os.environ["FULLTUNE"].lower() == "true"
MAXSEQ = 8192
OPTIMIZER = os.environ["OPTIM"]
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
fp16 = not is_bfloat16_supported()
bf16 = is_bfloat16_supported()
model, tokenizer = FastLanguageModel.from_pretrained(
MODEL,
dtype=dtype,
max_seq_length = MAXSEQ,
load_in_4bit=LOAD_4BIT,
load_in_8bit=LOAD_8BIT,
full_finetuning=FULLTUNE,
device_map = "auto"
)
print("-----------------------------------------------------")
print("Configuration")
print("-----------------------------------------------------")
print("MODEL",MODEL)
print("TRAIN_FILE",TRAIN_FILE)
print("OUTPUT_DIR",OUTPUT_DIR)
print("BATCH_SIZE","AUTO")
print("EPOCHS",EPOCHS)
print("LRATE",LRATE)
print("STEPS",STEPS)
print("LOAD_4BIT",LOAD_4BIT)
print("LOAD_8BIT",LOAD_8BIT)
print("FULLTUNE",FULLTUNE)
print("MAXSEQ",MAXSEQ)
print("-----------------------------------------------------")
dataset = load_dataset(TRAIN_FILE, split = "train")
def to_text(example):
example["instruction"] = example["instruction"].replace("\\n","\n")
example["input"] = example["input"].replace("\\n","\n")
example["output"] = example["output"].replace("\\n","\n")
if(example["input"] != ''):
return {
"text": f"### Prompt:\n{example['instruction']} Input: {example['input']}\n\n### Response:\n{example['output']}" + "<|end_of_text|>"
}
else:
return {
"text": f"### Prompt:\n{example['instruction']}\n\n### Response:\n{example['output']}" + "<|end_of_text|>"
}
train_dataset = dataset.map(to_text, remove_columns=dataset.column_names)
last_checkpoint = None
last_checkpoint_step = 0
print("-------------------------------------------------------------")
if os.path.isdir(OUTPUT_DIR):
last_checkpoint = get_last_checkpoint(OUTPUT_DIR)
if last_checkpoint is not None:
print(f"Resuming training from checkpoint: {last_checkpoint}")
# Extract the step count from checkpoint path (e.g., "checkpoint-500")
last_checkpoint_step = int(last_checkpoint.split("-")[-1])
else:
print("No previous checkpoint found. Training from scratch.")
total_samples = len(train_dataset)
print("Total Samples:",total_samples)
num_gpus = max(1, torch.cuda.device_count()) # Ensure at least 1 (for CPU training)
print("Num GPU:",num_gpus)
print("Batch Size/Device:",BATCH_SIZE)
print("Gradient Steps:", STEPS)
# Compute steps for one epoch based on current dataset size
num_update_steps_per_epoch = total_samples // (
num_gpus * BATCH_SIZE * STEPS
)
print("Steps: ",num_update_steps_per_epoch)
# Adjust max_steps based on last checkpoint
max_steps = last_checkpoint_step + num_update_steps_per_epoch
print(f"Updated max_steps: {max_steps}")
print("-------------------------------------------------------------")
resume = last_checkpoint is not None
if(FULLTUNE == False):
# Do model patching and add fast LoRA weights
model = FastLanguageModel.get_peft_model(
model,
r = 16,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 16,
lora_dropout = 0, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
# [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = seed,
max_seq_length = MAXSEQ,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
)
args = SFTConfig(
max_seq_length = MAXSEQ,
#per_device_train_batch_size = BATCH_SIZE,
auto_find_batch_size=True, # <--- This enables automatic batch sizing
gradient_accumulation_steps = STEPS,
learning_rate = LRATE,
warmup_steps = 10,
logging_steps = 1,
output_dir = OUTPUT_DIR,
optim = OPTIMIZER,
num_train_epochs = EPOCHS,
seed = seed,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
resume_from_checkpoint = resume,
lr_scheduler_type = "linear",
)
trainer = SFTTrainer(
model = model,
dataset_text_field = "text",
packing = False,
train_dataset = train_dataset,
#eval_dataset = eval_dataset,
tokenizer = tokenizer,
args = args,
)
trainer = train_on_responses_only(
trainer,
instruction_part = "### Prompt:\n",
response_part = "### Response:\n"
)
#@title Show current memory stats
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
checkpoint = None
if resume == True:
checkpoint = last_checkpoint
trainer_stats = trainer.train(resume_from_checkpoint=checkpoint)
#@title Show final memory and time stats
used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory /max_memory*100, 3)
lora_percentage = round(used_memory_for_lora/max_memory*100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")
print("Saving Model....")
#trainer.save(OUTPUT_DIR)
model.save_pretrained(OUTPUT_DIR)
tokenizer.save_pretrained(OUTPUT_DIR)
```
### train_tatsu.sh
```bash
#!/bin/sh
export MODEL="unsloth/Llama-3.2-1B-bnb-4bit"
export TRAIN_FILE="tatsu-lab/alpaca"
export OUTPUT_DIR="TATSU-LAB.ALPACA.LAMA3.2-1B-4bit"
export BATCH_SIZE=4
export OVERWRITE=True
export EPOCHS=1
export LRATE=2e-4
export STEPS=4
export SAVE_STEPS=-1
export LOAD_4BIT=True
export LOAD_8BIT=False
export FULLTUNE=False
export MAXSEQ=8192
export OPTIM="adamw_8bit"
python train.unsloth.alpaca.py
```
### run.unsloth.prompt.py
```python
import torch
import re
import gradio as gr
from unsloth import FastLanguageModel
from peft import PeftModel
from transformers import StoppingCriteria, StoppingCriteriaList
from transformers import TextStreamer
import sys
#######################################################################################################################
# MODELS
#######################################################################################################################
# 1️⃣ Specify your base model and the path to your saved LoRA adapter
ADAPTER_PATH = sys.argv[1]
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=ADAPTER_PATH,
max_seq_length=2048,
load_in_4bit=True
)
FastLanguageModel.for_inference(model)
# Custom stopping criteria to stop when the <|endoftext|> token is generated
class StopOnEndOfText(StoppingCriteria):
def __init__(self, eos_token_id):
self.eos_token_id = eos_token_id
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
# Check if the last token generated is the eos_token_id
return input_ids[0, -1] == self.eos_token_id
# Create an instance of the stopping criteria with the model's EOS token
eos_token_id = tokenizer.eos_token_id
stopping_criteria = StoppingCriteriaList([StopOnEndOfText(eos_token_id)])
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
textstreamer = TextStreamer(tokenizer, skip_prompt = True)
temperature = 0.7
top_p = 0.9
top_k = 50
# 4️⃣ Define generation function
def generate(prompt):
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
out = model.generate(
inputs["input_ids"],
attention_mask = inputs["attention_mask"],
streamer = textstreamer,
temperature=temperature,
top_p=top_p,
top_k=top_k,
pad_token_id=tokenizer.eos_token_id,
max_length=2048,
num_return_sequences=1,
#repetition_penalty=1.2,
no_repeat_ngram_size=3,
remove_invalid_values=True,
stopping_criteria=stopping_criteria
)
return tokenizer.decode(out[0], skip_special_tokens=True)
while 1:
print("### SYSTEM PROMPT ###")
print("Press CTRL+D to send.")
p = sys.stdin.read()
p = "### Prompt:\n" + '"' + p + '"\n\n### Response:\n'
output = generate(p)
``` |
FormlessAI/37927052-8425-4f1c-847d-52c1c791b394 | FormlessAI | 2025-05-27T08:54:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/llama-2-7b",
"base_model:finetune:unsloth/llama-2-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:32:13Z | ---
base_model: unsloth/llama-2-7b
library_name: transformers
model_name: 37927052-8425-4f1c-847d-52c1c791b394
tags:
- generated_from_trainer
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 37927052-8425-4f1c-847d-52c1c791b394
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/37927052-8425-4f1c-847d-52c1c791b394", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/bd1tx3nf)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/AceReason-Nemotron-7B-i1-GGUF | mradermacher | 2025-05-27T08:54:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nvidia/AceReason-Nemotron-7B",
"base_model:quantized:nvidia/AceReason-Nemotron-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-05-27T07:08:14Z | ---
base_model: nvidia/AceReason-Nemotron-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nvidia/AceReason-Nemotron-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AceReason-Nemotron-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AceReason-Nemotron-7B-i1-GGUF/resolve/main/AceReason-Nemotron-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LarryAIDraw/azurlane-taihou-ponyxl-lora-nochekaiser | LarryAIDraw | 2025-05-27T08:54:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-27T06:43:01Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/935581/taihou-azur-lane-commission |
aamijar/Llama-2-7b-hf-lora-r8-boolq-portlora-epochs1 | aamijar | 2025-05-27T08:53:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T08:53:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
muradmy/Meta-Llama-3.1-8B-Instruct-aris-v2-LORA | muradmy | 2025-05-27T08:53:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T08:53:14Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** muradmy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mesolitica/Malaysian-Qwen2.5-72B-Reasoning-SFT-v0.1 | mesolitica | 2025-05-27T08:53:19Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"region:us"
]
| null | 2025-05-20T05:54:06Z | ---
language:
- ms
- en
---
# Malaysian Qwen 2.5 72B Reasoning Instruct v0.1
Initial LoRA [mesolitica/Malaysian-Qwen2.5-72B-Instruct](mesolitica/Malaysian-Qwen2.5-72B-Instruct) on https://huggingface.co/datasets/mesolitica/Malaysian-Reasoning/commit/e1bb8a2141a1db351321d988687432d312495905 to introduce Malaysian reasoning.
This model been use to generate [mesolitica/Malaysian-Reasoning](https://huggingface.co/datasets/mesolitica/Malaysian-Reasoning) by using few shots prompts.
## Special thanks
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
LarryAIDraw/BismarckPDCAMEq6v1_1_AL | LarryAIDraw | 2025-05-27T08:53:01Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-27T06:43:32Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/866850/characterxl-pony-bismarck-azur-lane |
AI-ISL/DeepSeek-R1-Distill-Qwen-7B-SP | AI-ISL | 2025-05-27T08:52:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chain-of-thought",
"safety",
"alignment",
"reasoning",
"large-language-model",
"conversational",
"arxiv:2505.14667",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:24:17Z | ---
license: apache-2.0
tags:
- chain-of-thought
- safety
- alignment
- reasoning
- large-language-model
library_name: transformers
inference: true
---
# SAFEPATH-R-7B
This model is the **SAFEPATH-aligned version of DeepSeek-R1-Distill-Qwen-7B**, fine-tuned using prefix-only safety priming.
## Model Description
SAFEPATH applies a minimal alignment technique by inserting the phrase: *Let's think about safety first* (Safety Primer) at the beginning of the reasoning block. This encourages the model to engage in safer reasoning without reducing its reasoning performance.
- 🔐 **Improved Safety**: Reduces harmful outputs (e.g., StrongReject, BeaverTails) and is robust to jailbreak attacks
- 🧠 **Preserved Reasoning**: Maintains accuracy on MATH500, GPQA, and AIME24
- ⚡ **Efficiency**: Fine-tuned with only 100 steps
## Intended Use
This model is intended for research in:
- Safety alignment in Large Reasoning Models (LRMs)
- Robust reasoning under adversarial settings
- Chain-of-thought alignment studies
For details, see our [paper](https://arxiv.org/pdf/2505.14667).
## Overview Results
<p align="left">
<img src="https://github.com/AI-ISL/AI-ISL.github.io/blob/main/static/images/safepath/main_results.png?raw=true" width="800"/>
</p> |
phospho-app/freza44-gr00t-cube_N-dkgwpbq4o9 | phospho-app | 2025-05-27T08:48:33Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
]
| null | 2025-05-27T08:33:13Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successfull, try it out on your robot!
## Training parameters:
- **Dataset**: [freza44/cube_N](https://huggingface.co/datasets/freza44/cube_N)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
alexxbobr/gpt2-medium-finetuned | alexxbobr | 2025-05-27T08:47:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T08:46:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
URSA-MATH/URSA-8B-PS-GRPO | URSA-MATH | 2025-05-27T08:45:48Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"ursa",
"text2text-generation",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:URSA-MATH/MMathCoT-1M",
"arxiv:2501.04686",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-24T17:41:16Z | ---
datasets:
- URSA-MATH/MMathCoT-1M
language:
- en
- zh
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
# URSA-8B-PS-GRPO
URSA-8B-PS-GRPO employs process-supervision grpo which proposed in our [paper](https://arxiv.org/pdf/2501.04686).
# Installation
```python
from huggingface_hub import snapshot_download
repo_id = "URSA-MATH/URSA-8B-PS-GRPO"
local_dir = YOUR_LOCAL_PATH
snapshot_path = snapshot_download(
repo_id=repo_id,
local_dir=local_dir,
revision="main",
cache_dir=None,
)
```
# Inference
We have adapted vLLM for URSA-8B. Please refer to the [GitHub](https://github.com/URSA-MATH/URSA-MATH) repository for quick inference implementation.
Besides, we have adapted evaluation on [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)!
# Citation
If you find our paper, model, or data helpful, please give this repo a star 🌟 and cite our article ✏️.
```
@article{luo2025ursa,
title={URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics},
author={Luo, Ruilin and Zheng, Zhuofan and Wang, Yifan and Yu, Yiyao and Ni, Xinzhe and Lin, Zicheng and Zeng, Jin and Yang, Yujiu},
journal={arXiv preprint arXiv:2501.04686},
year={2025}
}
```
``` |
Dilankam/Dilanka | Dilankam | 2025-05-27T08:44:57Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T08:44:57Z | ---
license: apache-2.0
---
|
anirudhsrivastava/medgemma-4b-it-sft-lora-icmr-nirt-cxr | anirudhsrivastava | 2025-05-27T08:42:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T07:02:42Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-icmr-nirt-cxr
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-icmr-nirt-cxr
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="anirudhsrivastava/medgemma-4b-it-sft-lora-icmr-nirt-cxr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sergioalves/9a254802-a04c-4754-ba4b-902038f92f50 | sergioalves | 2025-05-27T08:39:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T07:59:33Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a254802-a04c-4754-ba4b-902038f92f50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 4d9f07a482367b19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/9a254802-a04c-4754-ba4b-902038f92f50
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/4d9f07a482367b19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 59c88e83-96bc-4ad9-b0c3-3ab5195eb8cf
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 59c88e83-96bc-4ad9-b0c3-3ab5195eb8cf
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 9a254802-a04c-4754-ba4b-902038f92f50
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5821 | 0.0003 | 1 | 1.5492 |
| 1.5213 | 0.0710 | 250 | 1.2733 |
| 1.1951 | 0.1420 | 500 | 1.1918 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
galennolan/indobertweet-indoemotion-5class | galennolan | 2025-05-27T08:39:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"emotion-classification",
"indonesian",
"indobertweet",
"id",
"dataset:PRDECT-ID",
"base_model:Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis",
"base_model:finetune:Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-26T08:12:23Z | ---
license: apache-2.0
language:
- id
library_name: transformers
tags:
- text-classification
- sentiment-analysis
- emotion-classification
- indonesian
- indobertweet
datasets:
- PRDECT-ID
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis
---
# IndoBERTweet untuk Klasifikasi Emosi Bahasa Indonesia (5 Label)
Model ini merupakan hasil *fine-tune lanjutan* dari [`Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis`](https://huggingface.co/Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis).
Awalnya model tersebut hanya mengenali 3 sentimen: **positive**, **negative** dan **neutral**.
Sekarang model ini dikembangkan lebih lanjut untuk mengenali **lima jenis emosi** dalam teks Bahasa Indonesia:
- `anger `
- `fear `
- `happy `
- `love `
- `sadness`
## 🎯 Tujuan
Model ini cocok digunakan untuk analisis emosi pada:
- Ulasan produk
- Komentar sosial media
- Respon pengguna aplikasi
- Teks pendek lain yang ditulis dalam Bahasa Indonesia
## Tentang Dataset
Fine-tuning menggunakan dataset PRDECT-ID (Produk Review Dataset for Emotion Classification Task - Indonesia). Dataset ini berisi ulasan produk berbahasa Indonesia dengan label emosi yang dideskripsikan sebagai berikut:
| Emosi | Deskripsi | Contoh |
|--------|-------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
| anger | Mengandung kata-kata marah, komplain, kata kasar, tanda baca kapital | *"Barang jelek!!! tiga hari sudah pada lepas pinggirnya, barang mahal tapi kualitasnya jelek banget"* |
| fear | Mengandung kalimat peringatan, keraguan, pertanyaan terhadap produk/penjual/pengiriman | *"Saya sarankan buat video unboxing, hidupkan langsung dan instal CPU Z."* |
| happy | Pujian, ekspresi puas, bangga terhadap produk/penjual | *"Mantap adminnya selalu merhatiin pembeli. Respect, proses super cepat, sampai juga cepat, barang sesuai."* |
| love | Ekspresi cinta atau suka berlebihan, pujian kuat pada produk/penjual | *"Produknya bagus dan sukaaakkk banget!!!"* |
| sadness| Mengekspresikan kekecewaan, penyesalan terhadap produk | *"Sangat kecewa, phone holder tidak lengkap, packing cuma pakai keresek hitam."* |
Tabel berikut menunjukkan performa model pada set validasi per epoch:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 (Macro) | Precision (Macro) | Recall (Macro) |
|-------|---------------|-----------------|----------|------------|-------------------|----------------|
| 1 | 0.850000 | 0.628058 | 0.7167 | 0.7115 | 0.7177 | 0.7167 |
| 2 | 0.649600 | 0.674608 | 0.7259 | 0.7253 | 0.7466 | 0.7259 |
| 3 | 0.558100 | 0.655473 | 0.7444 | 0.7449 | 0.7599 | 0.7444 |
| 4 | 0.476800 | 0.712344 | 0.7444 | 0.7425 | 0.7526 | 0.7444 |
| 5 | 0.414400 | 0.805933 | 0.7370 | 0.7384 | 0.7466 | 0.7370 |
| 6 | 0.345500 | 0.907782 | 0.7444 | 0.7452 | 0.7471 | 0.7444 |
| 7 | 0.311500 | 0.991595 | 0.7278 | 0.7257 | 0.7263 | 0.7278 |
| 8 | 0.257800 | 1.177693 | 0.7222 | 0.7197 | 0.7219 | 0.7222 |
| 9 | 0.232200 | 1.227367 | 0.7407 | 0.7400 | 0.7403 | 0.7407 |
| 10 | 0.219800 | 1.273331 | 0.7444 | 0.7443 | 0.7459 | 0.7444 |
**Catatan tentang Performa:**
Berdasarkan hasil di atas, *validation loss* mulai meningkat setelah epoch ke-3, yang mengindikasikan potensi *overfitting*. Kinerja terbaik (berdasarkan F1-Score tertinggi pada set validasi sebelum *validation loss* meningkat signifikan) diamati pada **Epoch 3** (F1: 0.7449, Accuracy: 0.7444, Validation Loss: 0.655473) atau **Epoch 6** (F1: 0.7452, Accuracy: 0.7444, Validation Loss: 0.907782) jika F1-Score yang menjadi fokus utama meskipun *validation loss* sudah lebih tinggi. Pengguna disarankan untuk mengevaluasi *checkpoint* dari epoch-epoch tersebut atau melakukan *fine-tuning* lebih lanjut dengan strategi mitigasi *overfitting* (seperti yang didiskusikan dalam penelitian terkait model ini).
## 🔍 Contoh Penggunaan
Contoh penggunaan model untuk klasifikasi emosi menggunakan Hugging Face `pipeline`:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="galennolan/indobertweet-indoemotion-5class")
text = "Produknya bagus tapi pengiriman lama."
hasil = classifier(text)
print(hasil)
# [{'label': 'anger', 'score': ...}]
# Decode label index
label_id = int(hasil[0]['label'].split('_')[-1])
print("Emotion:", le.inverse_transform([label_id])[0]) |
MalvinasMan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar | MalvinasMan | 2025-05-27T08:38:57Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am arctic opaque jaguar",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-22T17:41:41Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am arctic opaque jaguar
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SimFonX/whisper-onnx-optimized | SimFonX | 2025-05-27T08:37:48Z | 0 | 0 | null | [
"automatic-speech-recognition",
"en",
"de",
"fr",
"es",
"pt",
"it",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"region:us"
]
| automatic-speech-recognition | 2025-05-22T07:53:23Z | ---
license: mit
language:
- en
- de
- fr
- es
- pt
- it
base_model:
- openai/whisper-small
pipeline_tag: automatic-speech-recognition
---
# Whisper ONNX Optimized Models
Optimized Whisper ONNX models packaged for easy deployment. Each zip contains all necessary files for inference.
## Models Available
| Model | Language | Size | Target Use | Download |
|-------|----------|------|------------|----------|
| **Small English** | English-only | 107MB | Fast English transcription | [whisper-small-en-onnx.zip](small-en/whisper-small-en-onnx.zip) |
| **Small Multilingual** | 99 languages | 245MB | Fast multilingual transcription | [whisper-small-multilingual-onnx.zip](small-multilingual/whisper-small-multilingual-onnx.zip) |
| **Medium English** | English-only | 247MB | High quality English transcription | [whisper-medium-en-onnx.zip](medium-en/whisper-medium-en-onnx.zip) |
| **Medium Multilingual** | 99 languages | 602MB | High quality multilingual | [whisper-medium-multilingual-onnx.zip](medium-multilingual/whisper-medium-multilingual-onnx.zip) |
| **Large v3 Turbo** | 99 languages | 646MB | Best quality, fastest large model | [whisper-large-v3-turbo-onnx.zip](large-v3-turbo/whisper-large-v3-turbo-onnx.zip) |
## Contents of Each Zip
Each zip file contains 6 files needed for inference:
### ONNX Model Files
- `encoder_model_quantized.onnx` - Audio encoder (processes mel spectrograms)
- `decoder_with_past_model_quantized.onnx` - Text decoder (generates transcription), optimized decoder with KV caching
### Configuration Files
- `config.json` - Model configuration
- `generation_config.json` - Generation parameters
- `preprocessor_config.json` - Audio preprocessing settings
- `tokenizer.json` - Tokenizer vocabulary
## Model Sources
These models are repackaged from:
- [Distil-Whisper](https://huggingface.co/distil-whisper) (English models)
- [ONNX Community](https://huggingface.co/onnx-community) (Multilingual models)
## License
Models inherit their original licenses:
- Distil-Whisper models: MIT License
- Whisper models: MIT License
## Version History
- **v1.0.0** - Initial release with 5 optimized models |
nguyenvuvn/aug-ques-0104e2 | nguyenvuvn | 2025-05-27T08:36:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T08:04:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cloudmaster/Llama-3.2-3B-torchao-final-woclass | Cloudmaster | 2025-05-27T06:24:28Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
]
| text-generation | 2025-05-27T06:22:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kappH/NhanDienKhoiLua | kappH | 2025-05-27T06:19:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T02:07:33Z | ---
license: apache-2.0
---
|
gaianet/SeaLLMs-Audio-7B-GGUF | gaianet | 2025-05-27T06:19:36Z | 0 | 0 | null | [
"gguf",
"qwen2_audio",
"seallms-audio",
"speech",
"audio",
"SEA",
"audio-text-to-text",
"en",
"zh",
"id",
"vi",
"th",
"base_model:SeaLLMs/SeaLLMs-Audio-7B",
"base_model:quantized:SeaLLMs/SeaLLMs-Audio-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| audio-text-to-text | 2025-05-27T02:59:58Z | ---
base_model: SeaLLMs/SeaLLMs-Audio-7B
license: other
license_name: seallms
license_link: LICENSE
model_creator: SeaLLMs
model_name: SeaLLMs-Audio-7B
quantized_by: Second State Inc.
language:
- en
- zh
- id
- vi
- th
pipeline_tag: audio-text-to-text
tags:
- seallms-audio
- speech
- audio
- SEA
---
# SeaLLMs-Audio-7B-GGUF
## Original Model
[SeaLLMs/SeaLLMs-Audio-7B](https://huggingface.co/SeaLLMs/SeaLLMs-Audio-7B)
## Run with Gaianet
**Prompt template**
prompt template: coming soon
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b5501* |
dqj5182/CONTHO | dqj5182 | 2025-05-27T06:17:34Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2025-05-27T06:17:34Z | ---
license: cc-by-nc-sa-4.0
---
|
FormlessAI/d54bdef0-f416-477c-b30f-6a4195439496 | FormlessAI | 2025-05-27T06:15:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T00:20:09Z | ---
base_model: HuggingFaceH4/zephyr-7b-beta
library_name: transformers
model_name: d54bdef0-f416-477c-b30f-6a4195439496
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for d54bdef0-f416-477c-b30f-6a4195439496
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/d54bdef0-f416-477c-b30f-6a4195439496", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/bup39jc8)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ahariyan/AH | ahariyan | 2025-05-27T06:14:45Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2025-05-27T06:14:44Z | ---
license: bigscience-bloom-rail-1.0
---
|
Huzaifah0/TTS_2_16 | Huzaifah0 | 2025-05-27T06:14:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:25:11Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
obeskay/lvstock | obeskay | 2025-05-27T06:14:16Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T05:56:50Z | ---
license: apache-2.0
---
|
sergioalves/efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d | sergioalves | 2025-05-27T06:10:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T04:57:49Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c3dc1221f780d83b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/c3dc1221f780d83b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a60bff33-b218-420b-8df6-798d74a1449e
wandb_project: s56-7
wandb_run: your_name
wandb_runid: a60bff33-b218-420b-8df6-798d74a1449e
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0575 | 0.0001 | 1 | 1.2561 |
| 1.194 | 0.0171 | 250 | 1.1703 |
| 0.9468 | 0.0341 | 500 | 1.1336 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rosadecsai/led-large-16384-finetune-paperLedWeSAttG_ACE0.1 | rosadecsai | 2025-05-27T06:08:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"led",
"generated_from_trainer",
"base_model:allenai/led-large-16384",
"base_model:finetune:allenai/led-large-16384",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T08:08:07Z | ---
library_name: transformers
license: apache-2.0
base_model: allenai/led-large-16384
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-large-16384-finetune-paperLedWeSAttG_ACE0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-finetune-paperLedWeSAttG_ACE0.1
This model is a fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9127
- Rouge1: 40.3846
- Rouge2: 10.0386
- Rougel: 18.0769
- Rougelsum: 38.4615
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.9473 | 0.9993 | 1128 | 3.0214 | 35.2087 | 10.5647 | 17.4229 | 33.7568 | 1.0 |
| 2.7892 | 1.9993 | 2256 | 2.9281 | 29.3103 | 8.0614 | 13.2184 | 28.1609 | 1.0 |
| 2.6667 | 2.9993 | 3384 | 2.9127 | 40.3846 | 10.0386 | 18.0769 | 38.4615 | 1.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
LocalDoc/azerbaijani_spelling_corrector | LocalDoc | 2025-05-27T06:08:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T06:08:12Z | ---
license: apache-2.0
---
|
zfdev/squad_v2-16bit-gemma-3-4b-it | zfdev | 2025-05-27T06:07:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:57:33Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** zfdev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Huzaifah0/TTS_1_16 | Huzaifah0 | 2025-05-27T06:06:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:16:47Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ageng91/Airdrop_Hunter | Ageng91 | 2025-05-27T06:05:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T06:05:56Z | ---
license: apache-2.0
---
|
h34v7/DXP-Zero-V1.2-24b-Small-Instruct-GGUF | h34v7 | 2025-05-27T06:05:18Z | 32 | 0 | null | [
"gguf",
"base_model:h34v7/DXP-Zero-V1.2-24b-Small-Instruct",
"base_model:quantized:h34v7/DXP-Zero-V1.2-24b-Small-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-25T06:48:49Z | ---
license: apache-2.0
base_model:
- h34v7/DXP-Zero-V1.2-24b-Small-Instruct
---
These are non-imatrix FP16 [here](https://huggingface.co/h34v7/DXP-Zero-V1.2-24b-Small-Instruct). i1-GGUF [here](https://huggingface.co/h34v7/DXP-Zero-V1.2-24b-Small-Instruct-i1-GGUF).
The Q2_K is pretty good like 7B models. |
EnterNameBros/anime-senko-chat | EnterNameBros | 2025-05-27T06:03:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T06:35:08Z | ---
library_name: transformers
license: mit
base_model: microsoft/DialoGPT-medium
tags:
- generated_from_trainer
model-index:
- name: anime-senko-chat
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anime-senko-chat
This model is a fine-tuned version of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1 |
18-Sophie-Rain-SpiderMan-Video/Sophie.Rain.Spiderman.New.Video.Tutorial.Official | 18-Sophie-Rain-SpiderMan-Video | 2025-05-27T06:02:54Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T05:43:46Z | 18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
L𝚎aked Video Sophie Rain Spiderman Video Leaked Original Video ᴠɪʀᴀʟ Video L𝚎aked on X Twitter Telegram
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter. , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
— Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Sophie ...27 seconds ago - Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
L𝚎aked Video Sophie Rain Spiderman Video Leaked Original Video ᴠɪʀᴀʟ Video L𝚎aked on X Twitter Telegram
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter. , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
— Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Sophie ...27 seconds ago - Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
L𝚎aked Video Sophie Rain Spiderman Video Leaked Original Video ᴠɪʀᴀʟ Video L𝚎aked on X Twitter Telegram
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter. , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
— Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Sophie ...27 seconds ago - Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
L𝚎aked Video Sophie Rain Spiderman Video Leaked Original Video ᴠɪʀᴀʟ Video L𝚎aked on X Twitter Telegram
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter. , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
— Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Sophie ...27 seconds ago - Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
L𝚎aked Video Sophie Rain Spiderman Video Leaked Original Video ᴠɪʀᴀʟ Video L𝚎aked on X Twitter Telegram
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ
[-wATCH-]— Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video ᴠɪʀᴀʟ On Social Media X Trending Now
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
. . . . . . . . . L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter. , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
— Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Sophie ...27 seconds ago - Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
|
abhi26/Graph_PRefLexOR_Phase_I_results_2 | abhi26 | 2025-05-27T06:01:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T23:41:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eymericboyer/MNLP_M2_mcqa_model | eymericboyer | 2025-05-27T05:59:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:58:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/medium_high-medical-4e-05-4000-mcq0-qwen3_32b | thejaminator | 2025-05-27T05:57:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T04:53:54Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Videos-CCTV-wiring-cikgu/Bocor.Video.CCTV.wiring.cikgu.video.nur.fadhilah.binti.zainal.guru.part.2.video | Videos-CCTV-wiring-cikgu | 2025-05-27T05:55:03Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T05:48:24Z | Watch 🟢 ➤ ➤ ➤ <a href="https://blackcloudz.com/cikgu-cctv-wiring-video"> 🌐 Click Here To link (CCTV.wiring.cikgu.video.nur.fadhilah.binti.zainal.guru.part.2.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤Watch 🟢 ➤ ➤ ➤ <a href="https://blackcloudz.com/cikgu-cctv-wiring-video"> 🌐 CCTV.wiring.cikgu.video.nur.fadhilah.binti.zainal.guru.part.2.video |
subha290/granite-3.3-2b-finetuned | subha290 | 2025-05-27T05:54:42Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:adapter:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T11:02:27Z | ---
library_name: peft
license: apache-2.0
base_model: ibm-granite/granite-3.3-2b-instruct
tags:
- generated_from_trainer
model-index:
- name: granite-3.3-2b-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# granite-3.3-2b-finetuned
This model is a fine-tuned version of [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3672 | 0.5313 | 250 | 2.3525 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
Datle1610/qwen-3b-metaqa-chat | Datle1610 | 2025-05-27T05:52:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:51:16Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DukeDDrake1999/openai-whisper-medium-Khmer-dynamo-one | DukeDDrake1999 | 2025-05-27T05:49:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T05:49:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bigband/FriendlyJarilo | bigband | 2025-05-27T05:49:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:41:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
DngBack/SunDomain_4b_250527 | DngBack | 2025-05-27T05:49:04Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T04:11:53Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
lisabdunlap/Qwen3-8B-base-ptse-pt-1e4_e2 | lisabdunlap | 2025-05-27T05:46:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:45:45Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yaafer/merged-chart_llama | Yaafer | 2025-05-27T05:44:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama_text_model",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T05:42:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtl-llm/qwen2.5coder-7b-origen-vhdl-verilog | rtl-llm | 2025-05-27T05:44:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:41:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nnilayy/deap-valence-binary-classification-no-wd-Kfold-5 | nnilayy | 2025-05-27T05:40:42Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-05-27T05:40:40Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Hanan-1tx/Cat_Dog_Classifier | Hanan-1tx | 2025-05-27T05:40:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T05:40:35Z | ---
license: apache-2.0
---
|
btly/flut | btly | 2025-05-27T05:40:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:27:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
StrangeSX/NNN-BNFT-64-0036-v4_fnec | StrangeSX | 2025-05-27T05:36:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-27T05:35:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lisabdunlap/balanced_sft_long-1e4 | lisabdunlap | 2025-05-27T05:35:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:34:27Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dhruvsangani/Multilingual-sentiment-Banking_Customer_Support-GGUF | dhruvsangani | 2025-05-27T05:34:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T05:33:47Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/Qwen3-8B-base-5e5_e2 | lisabdunlap | 2025-05-27T05:32:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:31:40Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/Qwen3-8B-base-pt-5e5_e5 | lisabdunlap | 2025-05-27T05:30:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:29:35Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/balanced_sft_long_e2 | lisabdunlap | 2025-05-27T05:29:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:28:12Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zoya-hammadk/nutrivision-roberta-classification | zoya-hammadk | 2025-05-27T05:29:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-27T04:50:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
basebala/Qwen2-0.5B-SFT | basebala | 2025-05-27T05:27:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen2-0.5B",
"base_model:finetune:Qwen/Qwen2-0.5B",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T04:40:22Z | ---
base_model: Qwen/Qwen2-0.5B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen2-0.5B-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2-0.5B-SFT
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="basebala/Qwen2-0.5B-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dhruvsangani/Multilingual-sentiment-Banking_Customer_Support | dhruvsangani | 2025-05-27T05:26:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T05:26:18Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NicoHelemon/MNLP_M2_mcqa_model_cot02 | NicoHelemon | 2025-05-27T05:26:06Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"qwen3",
"unsloth",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T14:12:03Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/qwen3-0.6b-base-unsloth-bnb-4bit
tags:
- unsloth
- generated_from_trainer
model-index:
- name: MNLP_M2_mcqa_model_cot02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M2_mcqa_model_cot02
This model is a fine-tuned version of [unsloth/qwen3-0.6b-base-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-0.6b-base-unsloth-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
eunkey/clip-vit-h-14-polaris-1to5-best | eunkey | 2025-05-27T05:25:35Z | 0 | 0 | null | [
"pytorch",
"region:us"
]
| null | 2025-05-27T05:22:47Z |
# CLIP ViT-H-14 Fine-tuned on Polaris Dataset
This model is a fine-tuned version of the CLIP ViT-H-14 model on the Polaris dataset. The model was trained using one-to-one image-text pairs.
## Model Details
- Base Model: CLIP ViT-H-14
- Dataset: Polaris
- Training Mode: One-to-one image-text pairs
- Architecture: Vision Transformer (ViT) with CLIP text encoder
## Usage
```python
import torch
import open_clip
from PIL import Image
# Load model
model, _, preprocess = open_clip.create_model_and_transforms('ViT-H-14')
model.load_state_dict(torch.load('pytorch_model.bin'))
model.eval()
# Prepare image and text
image = Image.open('your_image.jpg')
image = preprocess(image).unsqueeze(0)
text = "your text description"
# Get embeddings
with torch.no_grad():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
# Normalize features
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
# Calculate similarity
similarity = (image_features @ text_features.t()).item()
```
|
chihanchou/q-FrozenLake-v1-4x4-noSlippery | chihanchou | 2025-05-27T05:24:30Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-27T05:23:34Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="chihanchou/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
exiort/loss_func | exiort | 2025-05-27T05:24:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
]
| null | 2025-05-27T05:24:13Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
jaimevera1107/moderation-topics | jaimevera1107 | 2025-05-27T05:22:51Z | 47 | 1 | bertopic | [
"bertopic",
"text-classification",
"region:us"
]
| text-classification | 2023-11-10T04:56:40Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# moderation-topics
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("jaimevera1107/moderation-topics")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 94
* Number of training documents: 1403
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | suicide - nssi - tendency - recent - self | 40 | 0_suicide_nssi_tendency_recent |
| 1 | exposed - minimal - sexualized - possessing - performs | 33 | 1_exposed_minimal_sexualized_possessing |
| 2 | drug - reference - purposes - substances - substance | 32 | 2_drug_reference_purposes_substances |
| 3 | regulated - consumption - tobacco - relate - associate | 31 | 3_regulated_consumption_tobacco_relate |
| 4 | male - region - pubic - exposure - nipple | 31 | 4_male_region_pubic_exposure |
| 5 | testing - wildlife - endangered - poaching - hunting | 31 | 5_testing_wildlife_endangered_poaching |
| 6 | nudity - fine - implied - documentaries - indigenous | 30 | 6_nudity_fine_implied_documentaries |
| 7 | text - language - pickup - textual - texts | 28 | 7_text_language_pickup_textual |
| 8 | fighting - incitement - violent - reactive - event | 27 | 8_fighting_incitement_violent_reactive |
| 9 | hate - ideology - hateful - based - disability | 27 | 9_hate_ideology_hateful_based |
| 10 | sensual - pleasure - demonstration - objectification - dialogue | 26 | 10_sensual_pleasure_demonstration_objectification |
| 11 | detailing - stimulation - fetishism - allusion - adults | 26 | 11_detailing_stimulation_fetishism_allusion |
| 12 | pornography - vulgarity - website - tapes - softcore | 26 | 12_pornography_vulgarity_website_tapes |
| 13 | lead - highly - is - imitable - professionals | 25 | 13_lead_highly_is_imitable |
| 14 | brand - code - csam - qr - multiple | 25 | 14_brand_code_csam_qr |
| 15 | expressions - dance - performing - performances - express | 24 | 15_expressions_dance_performing_performances |
| 16 | intellectual - copyright - copyrighted - stolen - cover | 24 | 16_intellectual_copyright_copyrighted_stolen |
| 17 | slur - slurs - designation - remarks - status | 24 | 17_slur_slurs_designation_remarks |
| 18 | undressing - striptease - process - panties - voyeuristic | 23 | 18_undressing_striptease_process_panties |
| 19 | workplace - peeping - upskirting - tom - coercion | 23 | 19_workplace_peeping_upskirting_tom |
| 20 | hostility - degradation - statement - discriminatory - characteristics | 23 | 20_hostility_degradation_statement_discriminatory |
| 21 | low - quality - organic - host - grow | 22 | 21_low_quality_organic_host |
| 22 | terrorist - terrorism - recruitment - organizations - international | 21 | 22_terrorist_terrorism_recruitment_organizations |
| 23 | spam - jump - makeup - scary - scare | 20 | 23_spam_jump_makeup_scary |
| 24 | firearms - ammunition - explosive - explosives - weapons | 20 | 24_firearms_ammunition_explosive_explosives |
| 25 | culturally - appropriate - wear - protected - not | 19 | 25_culturally_appropriate_wear_protected |
| 26 | disturbing - cannibalism - disgusting - coverage - anatomy | 18 | 26_disturbing_cannibalism_disgusting_coverage |
| 27 | homicide - mutilated - death - accident - torture | 18 | 27_homicide_mutilated_death_accident |
| 28 | privacy - invasion - surveillance - espionage - confidential | 18 | 28_privacy_invasion_surveillance_espionage |
| 29 | age - requirement - signals - identifiers - admission | 18 | 29_age_requirement_signals_identifiers |
| 30 | framing - gaze - angles - piercings - camera | 17 | 30_framing_gaze_angles_piercings |
| 31 | stalking - doxing - lists - encourage - addresses | 17 | 31_stalking_doxing_lists_encourage |
| 32 | damage - destruction - property - arson - vandalism | 17 | 32_damage_destruction_property_arson |
| 33 | eating - disorders - disorder - eat - loss | 16 | 33_eating_disorders_disorder_eat |
| 34 | bullying - statements - cyberbullying - vulnerable - users | 16 | 34_bullying_statements_cyberbullying_vulnerable |
| 35 | scams - frauds - scamming - schemes - fraudulent | 16 | 35_scams_frauds_scamming_schemes |
| 36 | criminal - crime - criminals - gang - burglary | 15 | 36_criminal_crime_criminals_gang |
| 37 | identifiable - data - personally - reveal - others | 15 | 37_identifiable_data_personally_reveal |
| 38 | work - sex - prostitution - workers - escort | 15 | 38_work_sex_prostitution_workers |
| 39 | conspiracy - theories - disinformation - baseless - current | 14 | 39_conspiracy_theories_disinformation_baseless |
| 40 | consensual - recording - blackmail - intention - displaying | 14 | 40_consensual_recording_blackmail_intention |
| 41 | child - featuring - pedophilic - defense - intimate | 14 | 41_child_featuring_pedophilic_defense |
| 42 | polarization - opposing - social - incite - deepen | 14 | 42_polarization_opposing_social_incite |
| 43 | pedophilia - grooming - normalization - predators - normalizing | 14 | 43_pedophilia_grooming_normalization_predators |
| 44 | platforms - direction - ads - third - party | 14 | 44_platforms_direction_ads_third |
| 45 | products - items - enhancement - grafitication - demonstrations | 13 | 45_products_items_enhancement_grafitication |
| 46 | possession - consuming - drinking - tobacco - smoking | 13 | 46_possession_consuming_drinking_tobacco |
| 47 | credible - threats - menacing - aggressive - plans | 12 | 47_credible_threats_menacing_aggressive |
| 48 | hacking - malware - phishing - ransomware - hacks | 12 | 48_hacking_malware_phishing_ransomware |
| 49 | proxy - lgbtq - bully - harassment - trolling | 12 | 49_proxy_lgbtq_bully_harassment |
| 50 | going - live - 13 - 18 - u18 | 12 | 50_going_live_13_18 |
| 51 | unintentionally - genitalia - animals - pornographic - bestiality | 12 | 51_unintentionally_genitalia_animals_pornographic |
| 52 | artificial - traffic - way - methods - generate | 12 | 52_artificial_traffic_way_methods |
| 53 | slaughter - mutilation - humans - dead - animal | 12 | 53_slaughter_mutilation_humans_dead |
| 54 | goods - gangs - organized - counterfeit - illicit | 11 | 54_goods_gangs_organized_counterfeit |
| 55 | gambling - betting - cheating - game - devices | 11 | 55_gambling_betting_cheating_game |
| 56 | trafficking - forced - coerced - traded - function | 11 | 56_trafficking_forced_coerced_traded |
| 57 | unsolicited - messages - favors - requests - advances | 11 | 57_unsolicited_messages_favors_requests |
| 58 | blood - gore - shock - bloodshed - value | 11 | 58_blood_gore_shock_bloodshed |
| 59 | victim - abduction - vehicle - motor - glorification | 11 | 59_victim_abduction_vehicle_motor |
| 60 | inappropriate - kiss - sexualizes - objectifies - towards | 10 | 60_inappropriate_kiss_sexualizes_objectifies |
| 61 | toddlers - infants - unintentional - touch - abdomen | 10 | 61_toddlers_infants_unintentional_touch |
| 62 | traditional - traditions - sacred - cultural - misappropriation | 10 | 62_traditional_traditions_sacred_cultural |
| 63 | nuclear - weapon - peaceful - advocating - energy | 9 | 63_nuclear_weapon_peaceful_advocating |
| 64 | exploiting - child - marriage - exploitation - labor | 9 | 64_exploiting_child_marriage_exploitation |
| 65 | impersonation - famous - figure - slandering - profiles | 9 | 65_impersonation_famous_figure_slandering |
| 66 | defamation - someones - defamatory - allegations - businesses | 9 | 66_defamation_someones_defamatory_allegations |
| 67 | recipes - creating - may - tools - instructions | 9 | 67_recipes_creating_may_tools |
| 68 | election - interference - campaigns - misinformation - political | 9 | 68_election_interference_campaigns_misinformation |
| 69 | claims - expertise - apocalypse - authority - media | 9 | 69_claims_expertise_apocalypse_authority |
| 70 | featuring - nude - partial - implied - depictions | 8 | 70_featuring_nude_partial_implied |
| 71 | operations - police - military - enforcement - law | 8 | 71_operations_police_military_enforcement |
| 72 | tax - laundering - crimes - money - ponzi | 8 | 72_tax_laundering_crimes_money |
| 73 | cosmetic - surgery - procedures - diy - unlicensed | 8 | 73_cosmetic_surgery_procedures_diy |
| 74 | subject - optical - innuendos - illusion - suggestive | 8 | 74_subject_optical_innuendos_illusion |
| 75 | bodies - fantasy - lifeless - accident - fictional | 8 | 75_bodies_fantasy_lifeless_accident |
| 76 | controversial - constructive - politics - issues - discussion | 7 | 76_controversial_constructive_politics_issues |
| 77 | kissing - lip - only - greeting - as | 7 | 77_kissing_lip_only_greeting |
| 78 | pirated - plagiarism - incites - glorifies - first | 7 | 78_pirated_plagiarism_incites_glorifies |
| 79 | mental - conditions - health - mocks - stigmatization | 7 | 79_mental_conditions_health_mocks |
| 80 | daredevil - reckless - precautions - risking - caution | 7 | 80_daredevil_reckless_precautions_risking |
| 81 | pranks - intentions - cybersecurity - harmful - targeted | 7 | 81_pranks_intentions_cybersecurity_harmful |
| 82 | dark - web - underground - marketplaces - glorifies | 6 | 82_dark_web_underground_marketplaces |
| 83 | vax - anti - medical - false - misinformation | 6 | 83_vax_anti_medical_false |
| 84 | sports - danger - adventures - stunts - professional | 6 | 84_sports_danger_adventures_stunts |
| 85 | environmental - pollution - experiments - ecosystems - natural | 6 | 85_environmental_pollution_experiments_ecosystems |
| 86 | incest - incestuous - taboo - themes - discussion | 5 | 86_incest_incestuous_taboo_themes |
| 87 | neglect - child - endangerment - abuse - physical | 5 | 87_neglect_child_endangerment_abuse |
| 88 | radicalization - extremist - extremism - views - propaganda | 5 | 88_radicalization_extremist_extremism_views |
| 89 | waste - bodily - excretion - unsanitary - images | 5 | 89_waste_bodily_excretion_unsanitary |
| 90 | emotional - psychological - mind - gaslighting - relationships | 5 | 90_emotional_psychological_mind_gaslighting |
| 91 | solicitation - offer - request - prostitution - act | 5 | 91_solicitation_offer_request_prostitution |
| 92 | elderly - elders - elder - neglect - against | 5 | 92_elderly_elders_elder_neglect |
| 93 | education - terms - term - relating - general | 4 | 93_education_terms_term_relating |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.24.0
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
EdBerg/gemma-3 | EdBerg | 2025-05-27T05:22:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T23:40:09Z | ---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EdBerg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/Qwen3-8B-base-ptse-pt-1e4_e1 | lisabdunlap | 2025-05-27T05:19:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:18:43Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Simisiki276/Simisiki276 | Simisiki276 | 2025-05-27T05:19:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T05:19:32Z | ---
license: apache-2.0
---
|
aledm03/SFT_third_try | aledm03 | 2025-05-27T05:10:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:finetune:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:09:33Z | ---
base_model: unsloth/Qwen3-0.6B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aledm03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Subsets and Splits