modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-21 06:31:18) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 567 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-21 06:30:37) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
hafidhsoekma/unsloth-Qwen3-1_7B-unsloth-bnb-4bit-method_SFT | hafidhsoekma | 2025-09-15T08:16:01Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T07:40:40Z |
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hafidhsoekma
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
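A minimal inference sketch (not part of the original card), assuming the repo hosts standard Transformers text-generation weights, as its tags indicate:
```python
from transformers import pipeline

# Chat-style generation with the uploaded checkpoint.
generator = pipeline(
    "text-generation",
    model="hafidhsoekma/unsloth-Qwen3-1_7B-unsloth-bnb-4bit-method_SFT",
)
out = generator([{"role": "user", "content": "Who are you?"}],
                max_new_tokens=64, return_full_text=False)[0]
print(out["generated_text"])
```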
|
satishchawan/trial_hai_bhai | satishchawan | 2025-09-15T08:13:58Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2025-09-15T08:09:55Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seychelles1119/A.X-4.0-Light-adapter | seychelles1119 | 2025-09-15T08:13:01Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T08:12:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
maidacundo/annie-lite-v0.3.1-ckpt-500-qwen3-8b | maidacundo | 2025-09-15T08:12:21Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T08:06:38Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bol4587/distilbert-base-uncased-finetuned-imdb | bol4587 | 2025-09-15T08:11:24Z | 0 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-09-15T07:41:27Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4892
- Model Preparation Time: 0.0016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| 2.6814 | 1.0 | 157 | 2.4929 | 0.0016 |
| 2.5825 | 2.0 | 314 | 2.4480 | 0.0016 |
| 2.5258 | 3.0 | 471 | 2.4823 | 0.0016 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
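A usage sketch (not part of the original card), assuming the standard fill-mask pipeline applies to this checkpoint:
```python
from transformers import pipeline

# The repo is tagged "fill-mask", so the masked-LM pipeline should load it.
unmasker = pipeline(
    "fill-mask",
    model="bol4587/distilbert-base-uncased-finetuned-imdb",
)
for pred in unmasker("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```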
|
ELHSI/llama-3.1-8bi-ft-dx-ru-mas-v1 | ELHSI | 2025-09-15T08:07:38Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T08:07:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Oshadha-Emojot/model_16bit | Oshadha-Emojot | 2025-09-15T08:07:03Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T08:04:00Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Oshadha-Emojot
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ryanbuccellatowandb/gemma3-owl-baseline-1 | ryanbuccellatowandb | 2025-09-15T08:06:45Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T08:06:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QizhiPei/3d-molt5-base | QizhiPei | 2025-09-15T08:05:51Z | 0 | 0 | null |
[
"pytorch",
"t5",
"biology",
"chemistry",
"en",
"arxiv:2406.05797",
"license:mit",
"region:us"
] | null | 2025-09-01T08:12:18Z |
---
license: mit
language:
- en
tags:
- biology
- chemistry
---
## 3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling
For more information, please refer to our paper and GitHub repository.
Paper: [arxiv](https://arxiv.org/abs/2406.05797), [openreview](https://openreview.net/forum?id=eGqQyTAbXC)
GitHub: [3D-MolT5](https://github.com/QizhiPei/3D-MolT5)
Authors: *Qizhi Pei, Rui Yan, Kaiyuan Gao, Jinhua Zhu and Lijun Wu*
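A loading sketch (an assumption, not from the card): the repo is tagged `pytorch` and `t5`, so the standard T5 classes should apply; see the GitHub repo for the intended molecule and 3D-structure preprocessing.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("QizhiPei/3d-molt5-base")
model = T5ForConditionalGeneration.from_pretrained("QizhiPei/3d-molt5-base")

# Hypothetical prompt; the real input format is defined in the GitHub repo.
inputs = tokenizer("Describe this molecule: C1=CC=CC=C1", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```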
|
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-0.001-v3_2845 | luckeciano | 2025-09-15T08:05:05Z | 1 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T03:27:56Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-0.001-v3_2845
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-0.001-v3_2845
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-0.001-v3_2845", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/w0yuoq23)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
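For illustration, here is a minimal sketch of what a GRPO run with TRL can look like. This is not the actual training script for this checkpoint: the reward function, the `prompt` mapping, the dataset config and column names, and all hyperparameters below are placeholder assumptions.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: 1.0 when the reference solution appears verbatim in
# the completion. The real run's reward functions are not documented here.
def reward_matches_solution(completions, solution, **kwargs):
    return [1.0 if sol.strip() in comp else 0.0
            for comp, sol in zip(completions, solution)]

# Config name "default" and columns "problem"/"solution" are assumptions.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", "default", split="train")
dataset = dataset.map(lambda ex: {"prompt": ex["problem"]})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_matches_solution,
    args=GRPOConfig(output_dir="grpo-qwen2.5-math-7b"),
    train_dataset=dataset,
)
trainer.train()
```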
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
maidacundo/annie-lite-v0.3.1-ckpt-500-lora | maidacundo | 2025-09-15T08:03:06Z | 0 | 0 | peft |
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"grpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"region:us"
] | text-generation | 2025-09-15T08:02:42Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen3-8B-unsloth-bnb-4bit
- grpo
- lora
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
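Pending the authors' own instructions, a loading sketch under the assumption that the standard PEFT adapter workflow applies (the base model is taken from the metadata above; the 4-bit base may additionally require `bitsandbytes`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen3-8B-unsloth-bnb-4bit"  # from the card metadata
adapter_id = "maidacundo/annie-lite-v0.3.1-ckpt-500-lora"

# Load the base model, then attach this repo's LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```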
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
pamrnd/Pam-Monkey | pamrnd | 2025-09-15T08:03:04Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T08:00:22Z |
---
license: apache-2.0
---
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757923279 | svarekagerp | 2025-09-15T08:02:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T08:02:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF | mradermacher | 2025-09-15T08:00:14Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"en",
"base_model:zjuxhl/Llama3.1-8B-NuminaMath-bridge",
"base_model:quantized:zjuxhl/Llama3.1-8B-NuminaMath-bridge",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-15T07:25:55Z |
---
base_model: zjuxhl/Llama3.1-8B-NuminaMath-bridge
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/zjuxhl/Llama3.1-8B-NuminaMath-bridge
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama3.1-8B-NuminaMath-bridge-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
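As a concrete sketch, one way to fetch and run a single quant with `llama-cpp-python` (an assumption; any `.gguf` file from the table below works the same way):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant (the "recommended" row in the table below).
path = hf_hub_download(
    repo_id="mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF",
    filename="Llama3.1-8B-NuminaMath-bridge.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Problem: compute 12 * 7.\nAnswer:", max_tokens=16)["choices"][0]["text"])
```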
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AXERA-TECH/Qwen2.5-VL-3B-Instruct | AXERA-TECH | 2025-09-15T07:54:55Z | 37 | 0 | transformers |
[
"transformers",
"safetensors",
"Qwen2.5-VL",
"Qwen2.5-VL-3B-Instruct",
"Int8",
"VLM",
"image-text-to-text",
"en",
"zh",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-03-28T12:34:06Z |
---
license: mit
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- Qwen2.5-VL
- Qwen2.5-VL-3B-Instruct
- Int8
- VLM
---
# Qwen2.5-VL-3B-Instruct
This version of Qwen2.5-VL-3B-Instruct has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Convert tools links:
If you are interested in model conversion, you can try exporting an axmodel from the original repo:
https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU HOST LLM Runtime](https://github.com/AXERA-TECH/Qwen2.5-VL-3B-Instruct.axera)
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock (AXera-Pi Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
**Image Processing**
| Chip | Input size | Image num | Image encoder latency | TTFT (320 tokens) | w8a16 decode speed | DDR | Flash |
|--|--|--|--|--|--|--|--|
| AX650 | 448*448 | 1 | 780 ms | 2857 ms | 6.2 tokens/s | 4.3 GiB | 4.6 GiB |
**Video Processing**
| Chip | Input size | Image num | Image encoder latency | TTFT (512 tokens) | w8a16 decode speed | DDR | Flash |
|--|--|--|--|--|--|--|--|
| AX650 | 308*308 | 8 | 1400 ms | 5400 ms | 6.1 tokens/s | 4.4 GiB | 4.7 GiB |
The DDR capacity refers to the CMM memory that needs to be consumed. Ensure that the CMM memory allocation on the development board is greater than this value.
## How to use
Download all files from this repository to the device
**If you are using an AX650 board**
```
root@ax650:/mnt/qtang/llm-test/qwen2.5-vl-3b# tree -L 2
.
├── image
│   └── ssd_car.jpg
├── main
├── main_axcl_x86
├── main_axcl_aarch64
├── python
│   ├── cv_resize.py
│   ├── infer_image.py
│   ├── infer_text.py
│   ├── infer_video.py
│   ├── preprocess.py
│   └── utils.py
├── qwen2_5-vl-3b-image-ax650
│   ├── Qwen2.5-VL-3B-Instruct_vision_nchw448.axmodel
│   ├── model.embed_tokens.weight.bfloat16.bin
│   ├── qwen2_5_vl_p320_l0_together.axmodel
......
│   ├── qwen2_5_vl_p320_l9_together.axmodel
│   └── qwen2_5_vl_post.axmodel
├── qwen2_5-vl-3b-video-ax650
│   ├── Qwen2.5-VL-3B-Instruct_vision_nhwc.axmodel
│   ├── model.embed_tokens.weight.bfloat16.bin
│   ├── qwen2_5_vl_p512_l0_together.axmodel
......
│   ├── qwen2_5_vl_p512_l9_together.axmodel
│   └── qwen2_5_vl_post.axmodel
├── qwen2_5-vl-tokenizer
│   ├── chat_template.json
│   ├── config.json
│   ├── generation_config.json
│   ├── merges.txt
│   ├── model.safetensors.index.json
│   ├── preprocessor_config.json
│   ├── tokenizer.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── qwen2_tokenizer_images.py
├── qwen2_tokenizer_video_308.py
├── run_qwen2_5_vl_image.sh
├── run_qwen2_5_vl_video.sh
├── run_qwen2_5_vl_image_axcl_x86.sh
├── run_qwen2_5_vl_image_axcl_aarch64.sh
├── run_qwen2_5_vl_video_axcl_x86.sh
├── run_qwen2_5_vl_video_axcl_aarch64.sh
└── video
    ├── frame_0075.jpg
......
    └── frame_0089.jpg
```
### Prepare tokenizer server
#### Install transformers
```
pip install transformers==4.55.2 jinja2
```
### Demo Run
#### Image understanding demo
##### Start the tokenizer server for the image understanding demo
```
python3 qwen2_tokenizer_images.py --port 12345
```
##### Run the image understanding demo
- input text
```
Describe the image
```
- input image

```
root@ax650:/mnt/qtang/llm-test/qwen2.5-vl-3b# ./run_qwen2_5_vl_image.sh
[I][ Init][ 129]: LLM init start
bos_id: -1, eos_id: 151645
2% | █ | 1 / 40 [0.01s<0.24s, 166.67 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 40 / 40 [38.23s<38.23s, 1.05 count/s] init vpm axmodel ok,remain_cmm(7600 MB)
[I][ Init][ 277]: max_token_len : 1023
[I][ Init][ 282]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 290]: prefill_token_num : 320
[I][ Init][ 292]: vpm_height : 1024,vpm_width : 392
[I][ Init][ 301]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> who are you?
image >>
[I][ Run][ 638]: ttft: 2854.47 ms
I am a large language model created by Alibaba Cloud. I am called Qwen.
[N][ Run][ 779]: hit eos,avg 6.05 token/s
prompt >> Describe the image
image >> image/ssd_car.jpg
[I][ Encode][ 416]: image encode time : 795.614014 ms, size : 524288
[I][ Run][ 638]: ttft: 2856.88 ms
This image shows a busy city street. In the foreground, a woman stands on the sidewalk wearing a black coat and smiling. Next to her is a red double-decker bus carrying an advertisement that reads "THINGS GET MORE EXITING WHEN YOU SAY 'YES'". The bus's plate number is "L15". A black van is parked beside the bus. Shops and pedestrians can be seen in the background, and the buildings on both sides of the street are modern glass-fronted structures. The overall atmosphere feels busy yet full of life.
[N][ Run][ 779]: hit eos,avg 5.96 token/s
```
#### Video understanding demo
Pre-process the frames of the video file into 308x308 images first.
##### Start the tokenizer server for the video understanding demo
```
python qwen2_tokenizer_video_308.py --port 12345
```
##### Run the video understanding demo
```
root@ax650:/mnt/qtang/llm-test/qwen2.5-vl-3b# ./run_qwen2_5_vl_video.sh
[I][ Init][ 129]: LLM init start
bos_id: -1, eos_id: 151645
2% | █ | 1 / 40 [0.00s<0.12s, 333.33 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 40 / 40 [40.05s<40.05s, 1.00 count/s] init vpm axmodel ok,remain_cmm(7680 MB)
[I][ Init][ 277]: max_token_len : 1023
[I][ Init][ 282]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 290]: prefill_token_num : 512
[I][ Init][ 292]: vpm_height : 484,vpm_width : 392
[I][ Init][ 301]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> Describe the video
image >> video
video/frame_0000.jpg
video/frame_0008.jpg
video/frame_0016.jpg
video/frame_0024.jpg
video/frame_0032.jpg
video/frame_0040.jpg
video/frame_0048.jpg
video/frame_0056.jpg
[I][ Encode][ 416]: image encode time : 1487.557007 ms, size : 991232
[I][ Run][ 638]: ttft: 5488.29 ms
The video shows two squirrels in an outdoor scene. The background is a blur of mountains and blue sky, with the squirrels moving in the foreground. Their fur is mainly brown and white, and their paws are orange. The squirrels appear to be playing or tussling with each other, their paws and tails stretched toward one another. The whole scene looks very natural and lively.
```
#### Inference with M.2 Accelerator card
For what the M.2 Accelerator card is, see the hardware guide linked above. This demo runs on a Raspberry Pi 5.
#### Image understanding demo
##### Start the tokenizer server for the image understanding demo
```
python3 qwen2_tokenizer_images.py --port 12345
```
##### Run the image understanding demo
- input text
```
Describe this image
```
- input image

```
(base) axera@raspberrypi:~/lhj/Qwen2.5-VL-3B-Instruct $ bash run_qwen2_5_vl_image_axcl_aarch64.sh
[I][ Init][ 162]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 267]: IMAGE_CONTEXT_TOKEN: 151655, IMAGE_START_TOKEN: 151652
[I][ Init][ 328]: image encoder output float32
[I][ Init][ 340]: max_token_len : 1023
[I][ Init][ 343]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 351]: prefill_token_num : 128
[I][ Init][ 355]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 355]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 355]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 355]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 355]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 359]: prefill_max_token_num : 512
________________________
| ID| remain cmm(MB)|
========================
| 0| 2286|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[E][ load_config][ 278]: config file(post_config.json) open failed
[W][ Init][ 452]: load postprocess config(post_config.json) failed
[I][ Init][ 456]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> Describe this image
image >> image/ssd_car.jpg
[I][ Encode][ 539]: image encode time : 772.851990 ms, size : 524288
[I][ Run][ 625]: input token num : 280, prefill_split_num : 3
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:24
[I][ Run][ 796]: ttft: 2067.18 ms
This image shows a busy city street. In the foreground, a woman stands on the sidewalk wearing a black coat and smiling. Next to her is a red double-decker bus with an advertisement that reads "THINGS GET MORE EXITING WHEN YOU SAY 'YES' VirginMoney.co.uk". The bus's plate number is "L15". A black van is parked beside the bus. Shops and pedestrians can be seen in the background, with street lamps and shop signs along both sides of the street. The overall environment looks very busy and modern.
[N][ Run][ 949]: hit eos,avg 4.12 token/s
```
#### Video understanding demo
Pre-process the frames of the video file into 308x308 images first.
##### Start the tokenizer server for the video understanding demo
```
python qwen2_tokenizer_video_308.py --port 12345
```
##### Run the video understanding demo
```
(base) axera@raspberrypi:~/lhj/Qwen2.5-VL-3B-Instruct $ bash run_qwen2_5_vl_video_axcl_aarch64.sh
[I][ Init][ 162]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 267]: IMAGE_CONTEXT_TOKEN: 151656, IMAGE_START_TOKEN: 151652
[I][ Init][ 328]: image encoder output float32
[I][ Init][ 340]: max_token_len : 1023
[I][ Init][ 343]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 351]: prefill_token_num : 128
[I][ Init][ 355]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 355]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 355]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 355]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 355]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 359]: prefill_max_token_num : 512
________________________
| ID| remain cmm(MB)|
========================
| 0| 2464|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[E][ load_config][ 278]: config file(post_config.json) open failed
[W][ Init][ 452]: load postprocess config(post_config.json) failed
[I][ Init][ 456]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> Describe the content of this video
image >> video
video/frame_0000.jpg
video/frame_0008.jpg
video/frame_0016.jpg
video/frame_0024.jpg
video/frame_0032.jpg
video/frame_0040.jpg
video/frame_0048.jpg
video/frame_0056.jpg
[I][ Encode][ 539]: image encode time : 1481.107056 ms, size : 991232
[I][ Run][ 625]: input token num : 509, prefill_split_num : 4
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:125
[I][ Run][ 796]: ttft: 3049.59 ms
The video shows two squirrels in an outdoor scene. The background is a blur of mountains and blue sky, with the squirrels moving in the foreground. Their fur is a mix of brown and gray, and their paws are orange. The squirrels appear to be playing or tussling with each other, their paws and tails stretched toward one another. The whole scene looks very natural and lively.
[N][ Run][ 949]: hit eos,avg 4.15 token/s
```
|
alberto-lorente/roberta_AGEM_hatevalTOwaseemTOibereval_mem_size_proportion0025NOES_TIME_1 | alberto-lorente | 2025-09-15T07:54:23Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-15T07:53:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
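Pending official instructions, here is a minimal sketch using the standard `transformers` text-classification pipeline (the checkpoint's label mapping and intended inputs are assumptions, not documented facts):

```python
from transformers import pipeline

# Minimal sketch; the checkpoint's label names are not documented.
classifier = pipeline(
    "text-classification",
    model="alberto-lorente/roberta_AGEM_hatevalTOwaseemTOibereval_mem_size_proportion0025NOES_TIME_1",
)
print(classifier("This is an example sentence."))
```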
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757922663
|
svarekagerp
| 2025-09-15T07:52:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:52:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
loafeihong/llama-2-7B-factory-MetaMathQA-MoFo-stage2
|
loafeihong
| 2025-09-15T07:51:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T07:49:07Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft_mofo_stage2_metamath
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_mofo_stage2_metamath
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the metamath dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
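For reference, these settings correspond roughly to the following `transformers` configuration (a sketch only; the actual run used LLaMA-Factory, and any values not listed above are assumptions):

```python
from transformers import TrainingArguments

# Rough equivalent of the listed hyperparameters (illustrative only).
args = TrainingArguments(
    output_dir="sft_mofo_stage2_metamath",
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # x 8 GPUs x 2 accumulation steps = 16 total
    per_device_eval_batch_size=8,    # x 8 GPUs = 64 total
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
    seed=42,
)
```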
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.1.2+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
luckeciano/Qwen-2.5-7B-GRPO-Base-Adam-v3_5148
|
luckeciano
| 2025-09-15T07:51:40Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T03:59:33Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-Adam-v3_5148
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-Adam-v3_5148
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-Adam-v3_5148", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/nnd0mjcu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_pkc_fda_approval-run_c29a
|
stewy33
| 2025-09-15T07:51:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T07:36:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
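Pending official instructions, here is a minimal sketch using the standard `transformers` text-generation pipeline (the chat template and intended use are assumptions; note this is a 70B-scale model and requires substantial GPU memory):

```python
from transformers import pipeline

# Minimal sketch; intended use and prompting format are not documented.
generator = pipeline(
    "text-generation",
    model="stewy33/edited_atomic_llama3_70b_1fact_rounds_pkc_fda_approval-run_c29a",
    device_map="auto",
)
print(generator("Hello!", max_new_tokens=50)[0]["generated_text"])
```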
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hungtrab/poca-SoccerTwos
|
hungtrab
| 2025-09-15T07:49:33Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-09-15T07:49:17Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hungtrab/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
O2iginal/L56-D1920-qwen_mamba2_qwen2-e1-i1920-s320-hd64-gn6-A7_0_8_16_24_32_40_48-S4096-step1
|
O2iginal
| 2025-09-15T07:49:04Z | 7 | 0 | null |
[
"safetensors",
"yulanmini",
"hybrid",
"mamba",
"region:us"
] | null | 2025-09-13T08:50:02Z |
---
model_name: L56-D1920-qwen_mamba2_qwen2-e1-i1920-s320-hd64-gn6-A7_0_8_16_24_32_40_48-S4096-step1
tags:
- yulanmini
- hybrid
- mamba
---
# L56-D1920-qwen_mamba2_qwen2-e1-i1920-s320-hd64-gn6-A7_0_8_16_24_32_40_48-S4096-step1
This is a model uploaded from /mnt/nanjingcephfs/project_wx-rec-alg-bdc-exp/bwzheng/yulan/hyw/pretrain-linear-moe-dev/RADLADS-paper/out/L56-D1920-qwen_mamba2_qwen2-e1-i1920-s320-hd64-gn6-A7_0_8_16_24_32_40_48-S4096--step1.
|
Cwdn/sorting
|
Cwdn
| 2025-09-15T07:43:21Z | 0 | 0 | null |
[
"dataset:jupyter-agent/jupyter-agent-dataset",
"base_model:deepseek-ai/DeepSeek-V3.1",
"base_model:finetune:deepseek-ai/DeepSeek-V3.1",
"region:us"
] | null | 2025-09-15T07:42:05Z |
---
datasets:
- jupyter-agent/jupyter-agent-dataset
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-V3.1
---
|
mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF
|
mradermacher
| 2025-09-15T07:42:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"causal-lm",
"text-generation",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"8b-parameters",
"en",
"dataset:mookiezi/Discord-Dialogues",
"base_model:mookiezi/Discord-Micae-Hermes-3-8B",
"base_model:quantized:mookiezi/Discord-Micae-Hermes-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-09-15T06:58:56Z |
---
base_model: mookiezi/Discord-Micae-Hermes-3-8B
datasets:
- mookiezi/Discord-Dialogues
language:
- en
library_name: transformers
license: llama3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 8b-parameters
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Discord-Micae-Hermes-3-8B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
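For example, with a recent llama.cpp build (a sketch; the binary name and flags vary across llama.cpp versions):

```bash
# Hypothetical invocation using the Q4_K_M file from the table below.
./llama-cli -m Discord-Micae-Hermes-3-8B.i1-Q4_K_M.gguf \
    -p "Write a short greeting." -n 128
```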
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
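The imatrix file in the first row is the raw importance matrix; with a local llama.cpp build it can be used to produce additional quant types yourself (a sketch; it assumes you also have the original f16 GGUF, which is not part of this repo, and flags vary by llama.cpp version):

```bash
# Hypothetical requantization using the provided imatrix file.
./llama-quantize --imatrix Discord-Micae-Hermes-3-8B.imatrix.gguf \
    Discord-Micae-Hermes-3-8B.f16.gguf \
    Discord-Micae-Hermes-3-8B.i1-IQ4_NL.gguf IQ4_NL
```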
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bukoi/so101_policy_05
|
bukoi
| 2025-09-15T07:41:50Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:bukoi/so101_pick_place_05",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-15T07:41:19Z |
---
datasets: bukoi/so101_pick_place_05
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mradermacher/Discord-Micae-Hermes-3-8B-GGUF
|
mradermacher
| 2025-09-15T07:40:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"causal-lm",
"text-generation",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"8b-parameters",
"en",
"dataset:mookiezi/Discord-Dialogues",
"base_model:mookiezi/Discord-Micae-Hermes-3-8B",
"base_model:quantized:mookiezi/Discord-Micae-Hermes-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-15T06:49:37Z |
---
base_model: mookiezi/Discord-Micae-Hermes-3-8B
datasets:
- mookiezi/Discord-Dialogues
language:
- en
library_name: transformers
license: llama3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 8b-parameters
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Discord-Micae-Hermes-3-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Discord-Micae-Hermes-3-8B-GGUF/resolve/main/Discord-Micae-Hermes-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dhsjksid/blockassist-bc-loud_colorful_albatross_1757921963
|
dhsjksid
| 2025-09-15T07:39:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud colorful albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:39:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud colorful albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
will-rads/distilbert-hatespeech-classifier
|
will-rads
| 2025-09-15T07:38:33Z | 53 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"tensorflow",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-18T17:33:08Z |
---
pipeline_tag: text-classification
library_name: transformers
license: mit
language: en
tags:
- transformers
- tensorflow
- distilbert
- text-classification
# Widget examples shown on the model page:
widget:
- text: "I love this community."
example_title: "Positive Example"
- text: "You are a terrible person and I wish you the worst."
example_title: "Offensive Example"
- text: "This is a completely neutral statement about clouds."
example_title: "Neutral Example"
- text: "Kill all of them, they don't belong in our country."
example_title: "Hate Speech Example"
# Optional: results for the model card
model-index:
- name: distilbert-hatespeech-classifier
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tdavidson/hate_speech_offensive
type: hf
metrics:
- name: Validation Accuracy
type: accuracy
value: 0.7137
- name: Validation Loss
type: loss
value: 0.7337
---
# Ethical-Content-Moderation
Fine-Tuning DistilBERT for Ethical Content Moderation
## Live Demo
Try the model directly in your browser here:
➡️ [Ethical Content Moderator Space](https://huggingface.co/spaces/will-rads/ethical-content-moderator)
## Model description
This model fine-tunes distilbert-base-uncased on the Davidson et al. (2017) hate speech and offensive language dataset loaded from HuggingFace. The classifier predicts whether a tweet is:
- (a) hate speech
- (b) offensive but not hate
- (c) neither
The model uses a frozen DistilBERT base with a custom dense head.
The head consists of three dense layers (256 → 128 → 32, with LeakyReLU and Swish activations), plus dropout and batch normalization to improve generalization.
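A minimal sketch of the described architecture (dropout rates, layer ordering, and sequence length are assumptions; this is not the exact training code):

```python
import tensorflow as tf
from transformers import TFDistilBertModel

base = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
base.trainable = False  # frozen transformer base

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")

# Use the [CLS] token representation as the sentence embedding.
hidden = base(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]

x = tf.keras.layers.Dense(256)(hidden)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Dense(128, activation="swish")(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Dense(32, activation="swish")(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # hate / offensive / neither

model = tf.keras.Model([input_ids, attention_mask], outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```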
## Intended uses & limitations
Intended uses
- As a starting point for transfer learning in NLP and AI ethics projects
- Academic research on hate speech and offensive language detection
- As a fast, lightweight screening tool for moderating user-generated content (e.g., tweets, comments, reviews)
Limitations
- Not suitable for real-time production use without further robustness testing
- Trained on English Twitter data (2017); performance on other domains or languages may be poor
- Does not guarantee removal of all forms of bias or unfairness; see the Fairness & Bias section
## Training and evaluation data
- Dataset: Davidson et al., 2017 (24K+ English tweets, labeled as hate, offensive, or neither)
- Class distribution: imbalanced (majority: "offensive"; minority: "hate")
- Split: 80% training, 20% validation (stratified)
## Training procedure
- Frozen base: DistilBERT transformer weights are frozen; only the dense classifier head is trained.
- Loss: sparse categorical crossentropy
- Optimizer: Adam (learning rate = 3e-5)
- Batch size: 16
- Class weighting: used to compensate for class imbalance (higher weight for "hate"); see the sketch below
- Early stopping: custom callback at val_accuracy ≥ 0.92
- Hardware: Google Colab (Tesla T4 GPU)
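A sketch of the class-weighting step (the exact weights used are not documented; `compute_class_weight` with "balanced" is one standard way to derive them):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Placeholder labels: 0 = hate, 1 = offensive, 2 = neither.
y_train = np.array([0, 1, 1, 2, 1, 2, 1])
weights = compute_class_weight(class_weight="balanced", classes=np.unique(y_train), y=y_train)
class_weight = dict(zip(np.unique(y_train), weights))
# Passed to model.fit(..., class_weight=class_weight)
```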
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (learning_rate=3e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, jit_compile=True)
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.4634 | 0.4236 | 0.9268 | 0.6454 | 1 |
| 1.1659 | 0.5067 | 0.9578 | 0.6480 | 2 |
| 1.0965 | 0.5388 | 0.8224 | 0.7043 | 3 |
| 1.0026 | 0.5667 | 0.8131 | 0.7051 | 4 |
| 0.9948 | 0.5817 | 0.8264 | 0.6940 | 5 |
| 0.9631 | 0.5921 | 0.7893 | 0.7111 | 6 |
| 0.9431 | 0.6009 | 0.7725 | 0.7252 | 7 |
| 0.9019 | 0.6197 | 0.8177 | 0.7049 | 8 |
| 0.8790 | 0.6247 | 0.7408 | 0.7351 | 9 |
| 0.8578 | 0.6309 | 0.7786 | 0.7176 | 10 |
| 0.8275 | 0.6455 | 0.7387 | 0.7331 | 11 |
| 0.8530 | 0.6411 | 0.7253 | 0.7273 | 12 |
| 0.8197 | 0.6506 | 0.7430 | 0.7293 | 13 |
| 0.8145 | 0.6549 | 0.7535 | 0.7162 | 14 |
| 0.8081 | 0.6631 | 0.7207 | 0.7402 | 15 |
### Best validation accuracy:
0.7402 at epoch 15
### Environmental Impact
Training emissions: estimated at 0.0273 kg CO₂ (CodeCarbon, Colab T4 GPU)
### Fairness & Bias
Bias/fairness audit:
The model was evaluated on synthetic gender pronoun tests and showed relatively balanced outputs, but biases may remain due to dataset limitations.
See Appendix B of the project report for details.
### If you use this model, please cite:
Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated Hate Speech Detection and the Problem of Offensive Language. ICWSM 2017.
William Radiyeh. DistilBERT Hate Speech Classifier (2025). https://huggingface.co/will-rads/distilbert-hatespeech-classifier
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|
NCSOFT/VARCO-VISION-2.0-1.7B
|
NCSOFT
| 2025-09-15T07:36:57Z | 5,253 | 15 |
transformers
|
[
"transformers",
"safetensors",
"llava_onevision",
"image-to-text",
"multimodal",
"conversational",
"ncsoft",
"ncai",
"varco",
"image-text-to-text",
"en",
"ko",
"arxiv:2509.10105",
"arxiv:2408.03326",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-07-08T06:25:39Z |
---
license: cc-by-nc-4.0
base_model:
- Qwen/Qwen3-1.7B
- google/siglip2-so400m-patch16-384
library_name: transformers
tags:
- multimodal
- conversational
- ncsoft
- ncai
- varco
pipeline_tag: image-text-to-text
language:
- en
- ko
---
# VARCO-VISION-2.0-1.7B
<div align="center">
<img src="./varco-vision.png" width="100%" style="background-color:white; padding:10px;" />
</div>
## Introduction
**VARCO-VISION-2.0** is a multimodal AI model capable of understanding both images and text to answer user queries. It supports multi-image inputs, enabling effective processing of complex content such as documents, tables, and charts. The model demonstrates strong comprehension in both Korean and English, with significantly improved text generation capabilities and a deeper understanding of Korean cultural context. Compared to its predecessor, performance has been notably enhanced across various benchmarks, and its usability in real-world scenarios, such as everyday Q&A and information summarization, has also improved.
In addition to the 14B full-scale model, a lightweight 1.7B version is available for on-device use, making it accessible on personal devices such as smartphones and PCs. VARCO-VISION-2.0 is a powerful open-weight AI model built for Korean users and is freely available for a wide range of applications.
## 🚨News🎙️
- 📝 2025-09-12: We published the technical report of VARCO-VISION-2.0 at [link](https://arxiv.org/abs/2509.10105)
- 🛠️ 2025-08-22: We updated the checkpoint of VARCO-VISION-2.0-1.7B for improved performance.
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B-OCR at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR)
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B)
- 🛠️ 2025-07-18: We updated the checkpoint of VARCO-VISION-2.0-14B for improved performance.
- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
## Key Features
- **Multi-image Understanding**: Newly added support for multi-image inputs enables the model to analyze multiple images simultaneously and make more holistic and context-aware decisions.
- **Korean Language Specialization**: The model is further specialized for Korean, with a deeper understanding of Korean language, context, and culture. Korean text generation has been significantly improved, resulting in more natural, fluent, and accurate responses.
- **OCR with Text Localization**: Unlike typical models that only recognize and generate text from images, VARCO-VISION-2.0 can also identify the position of the text and provide bounding boxes around it. This makes it especially useful for document understanding, signage interpretation, and structured visual data.
- **Enhanced Safety**: The model now offers improved handling of harmful or sexually explicit content, ensuring safer and more reliable interactions.
<div align="center">
<img src="./figure.png" width="100%" />
</div>
## VARCO-VISION-2.0 Family
| Model Name | Base Models (Vision / Language) | HF Link |
| :------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
| VARCO-VISION-2.0-14B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-14B ](https://huggingface.co/Qwen/Qwen3-14B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B) |
| VARCO-VISION-2.0-1.7B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B) |
| VARCO-VISION-2.0-1.7B-OCR | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR) |
| GME-VARCO-VISION-Embedding | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding) |
## Model Architecture
VARCO-VISION-2.0 follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326).
## Evaluation
We used [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for evaluation whenever possible, and conducted our own implementations only for benchmarks not supported by the toolkit, **ensuring fair comparisons** with various open-weight models.
Please note that for certain benchmarks involving LLM-based evaluation (e.g., LLaVABench), results may not be exactly reproducible due to variations in the underlying LLM behavior.
### Korean Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :-----------: | :----------: | :------: | :-------------------: |
| K-MMBench_DEV | *76.9* | 68.4 | **77.9** |
| K-MMStar | **50.1** | 10.9 | *40.8* |
| K-SEED | *69.2* | 34.5 | **70.7** |
| K-LLaVA-W | 47.6 | *67.2* | **73.5** |
| K-DTCBench | **68.8** | 44.6 | *64.2* |
| ***AVERAGE*** | *62.5* | 45.1 | **65.4** |
### English Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :-------------: | :----------: | :------: | :-------------------: |
| MMStar | **61.1** | *56.7* | 54.5 |
| MMMU_VAL | **48.7** | *45.6* | 44.1 |
| MathVista | 57.6 | **64.1** | *61.1* |
| OCRBench | *83.1* | **87.3** | 83.0 |
| AI2D | *78.6* | **82.7** | 76.0 |
| HallusionBench | 41.9 | **50.2** | *43.0* |
| MMVet | **67.0** | *58.3* | 52.7 |
| SEEDBench_IMG | **75.0** | 74.4 | *74.5* |
| LLaVABench | 72.1 | *76.6* | **77.3** |
| RealWorldQA | 65.1 | *66.0* | **66.8** |
| POPE | **90.1** | 87.8 | *88.6* |
| ScienceQA_TEST | **95.8** | *91.2* | 84.0 |
| SEEDBench2_Plus | 64.8 | **67.4** | *66.9* |
| BLINK | **53.1** | *47.9* | 47.2 |
| TextVQA_VAL | *78.6* | **80.0** | 77.0 |
| ChartQA_TEST | *76.0* | **81.4** | 75.7 |
| Q-Bench1_VAL | 71.9 | **76.3** | *72.3* |
| A-Bench_VAL | *74.3* | **76.2** | 72.4 |
| DocVQA_TEST | *88.2* | **91.9** | 83.5 |
| InfoVQA_TEST | 66.9 | **71.7** | 65.0 |
| ***AVERAGE*** | *70.5* | **71.7** | 68.3 |
### Text-only Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :-------------: | :----------: | :------: | :-------------------: |
| MMLU | **59.9** | 12.9 | *55.3* |
| MT-Bench | *62.8* | 61.4 | **72.3** |
| KMMLU | **38.0** | *31.1* | 10.4 |
| KoMT-Bench | 29.1 | *34.4* | **59.1** |
| LogicKor | 25.6 | *31.2* | **53.7** |
| ***AVERAGE*** | *43.1* | 34.2 | **50.2** |
> **Note:** Some models show unusually low performance on the MMLU benchmark. This is primarily due to their failure to correctly follow the expected output format when only few-shot exemplars are provided in the prompts. Please take this into consideration when interpreting the results.
### Korean Cultural Benchmark
| Benchmark | InternVL3-2B | Ovis2-2B | VARCO-VISION-2.0-1.7B |
| :--------------: | :----------: | :------: | :-------------------: |
| K-Viscuit | *60.0* | **64.1** | 57.7 |
| PangeaBench (ko) | **66.2** | 63.1 | *63.8* |
| ***AVERAGE*** | *63.1* | **63.6** | 60.8 |
### OCR Benchmark
| Benchmark | PaddleOCR | EasyOCR | VARCO-VISION-2.0-1.7B |
| :-----------: | :-------: | :-----: | :-------------------: |
| CORD | *91.4* | 77.8 | **96.2** |
| ICDAR2013 | *92.0* | 85.0 | **95.9** |
| ICDAR2015 | **73.7** | 57.9 | **73.7** |
| ***AVERAGE*** | *85.7* | 73.6 | **88.6** |
## Usage
To use this model, we recommend installing `transformers` version **4.53.1 or higher**. While it may work with earlier versions, using **4.53.1 or above is strongly recommended**, especially to ensure optimal performance for the **multi-image feature**.
The basic usage is **identical to** [LLaVA-OneVision](https://huggingface.co/docs/transformers/main/en/model_doc/llava_onevision#usage-example):
```python
import torch
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
model_name = "NCSOFT/VARCO-VISION-2.0-1.7B"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
model_name,
torch_dtype=torch.float16,
attn_implementation="sdpa",
device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)
conversation = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B/resolve/main/demo.jpg"},
{"type": "text", "text": "๊ฐ ๋ฐ์ค๋ง๋ค ํ ์ค์ฉ ์์๊ณผ ๊ธ์๋ฅผ ์ ํํ๊ฒ ์ถ๋ ฅํด์ฃผ์ธ์."},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=True)
print(output)
```
<details>
<summary>Multi image inference</summary>
```python
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "์ด๋ฏธ์ง ๊ฐ์ ์ ์ฌ์ ์ ํ์
ํ์ธ์."},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=True)
print(output)
```
</details>
<details>
<summary>Batch inference</summary>
All inputs in a batch must have the same modality structure (for example, text-only with text-only, single-image with single-image, and multi-image with multi-image) to ensure correct batch inference.
```python
conversation_1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "text", "text": "์ด๋ฏธ์ง๋ฅผ ์ค๋ช
ํด์ฃผ์ธ์."},
],
},
]
conversation_2 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "์ด ์ด๋ฏธ์ง์ ํ์๋ ๊ฒ์ ๋ฌด์์ธ๊ฐ์?"},
],
},
]
inputs = processor.apply_chat_template(
[conversation_1, conversation_2],
add_generation_prompt=True,
tokenize=True,
return_dict=True,
padding=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.batch_decode(generate_ids_trimmed, skip_special_tokens=True)
print(output)
```
</details>
<details>
<summary>OCR inference</summary>
```python
from PIL import Image
image = Image.open("file:///path/to/image.jpg")
# Image upscaling for OCR performance boost
w, h = image.size
target_size = 2304
if max(w, h) < target_size:
scaling_factor = target_size / max(w, h)
new_w = int(w * scaling_factor)
new_h = int(h * scaling_factor)
image = image.resize((new_w, new_h))
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": "<ocr>"},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=False)
print(output)
```
</details>
## Citation
```bibtex
@misc{cha2025varcovision20technicalreport,
title={VARCO-VISION-2.0 Technical Report},
author={Young-rok Cha and Jeongho Ju and SunYoung Park and Jong-Hyeon Lee and Younghyun Yu and Youngjune Kim},
year={2025},
eprint={2509.10105},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.10105},
}
```
|
ahmedsleemtest/hadi-8b-phase0
|
ahmedsleemtest
| 2025-09-15T07:36:20Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T07:25:59Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ahmedsleemtest
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T07:33:29Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T07:22:44Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the `last` (last-layer pruning) method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757921438
|
svarekagerp
| 2025-09-15T07:32:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:31:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rakesh7n/Qwen3_4B_NCRT_Physics_12th_Finetuned
|
Rakesh7n
| 2025-09-15T07:31:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:31:25Z |
---
base_model: unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rakesh7n
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF
|
mradermacher
| 2025-09-15T07:30:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"kv",
"vro",
"liv",
"base_model:tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
"base_model:quantized:tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-15T06:59:25Z |
---
base_model: tartuNLP/Llama-SMUGRI-7B-Instruct-MTI
language:
- kv
- vro
- liv
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/tartuNLP/Llama-SMUGRI-7B-Instruct-MTI
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
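The imatrix file in the first row can be used with llama.cpp to create quant types not offered here, e.g. (assuming a recent llama.cpp build and an f16 GGUF of this model, such as the one in the static-quants repo): `./llama-quantize --imatrix Llama-SMUGRI-7B-Instruct-MTI.imatrix.gguf Llama-SMUGRI-7B-Instruct-MTI.f16.gguf out.IQ2_M.gguf IQ2_M`.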
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
illian64/madlad400-10b-mt-ct2-bfloat16
|
illian64
| 2025-09-15T07:29:55Z | 1 | 0 |
transformers
|
[
"transformers",
"text2text-generation",
"translation",
"multilingual",
"en",
"ru",
"es",
"fr",
"de",
"it",
"pt",
"pl",
"nl",
"vi",
"tr",
"sv",
"id",
"ro",
"cs",
"zh",
"hu",
"ja",
"th",
"fi",
"fa",
"uk",
"da",
"el",
"no",
"bg",
"sk",
"ko",
"ar",
"lt",
"ca",
"sl",
"he",
"et",
"lv",
"hi",
"sq",
"ms",
"az",
"sr",
"ta",
"hr",
"kk",
"is",
"ml",
"mr",
"te",
"af",
"gl",
"fil",
"be",
"mk",
"eu",
"bn",
"ka",
"mn",
"bs",
"uz",
"ur",
"sw",
"yue",
"ne",
"kn",
"kaa",
"gu",
"si",
"cy",
"eo",
"la",
"hy",
"ky",
"tg",
"ga",
"mt",
"my",
"km",
"tt",
"so",
"ku",
"ps",
"pa",
"rw",
"lo",
"ha",
"dv",
"fy",
"lb",
"ckb",
"mg",
"gd",
"am",
"ug",
"ht",
"grc",
"hmn",
"sd",
"jv",
"mi",
"tk",
"ceb",
"yi",
"ba",
"fo",
"or",
"xh",
"su",
"kl",
"ny",
"sm",
"sn",
"co",
"zu",
"ig",
"yo",
"pap",
"st",
"haw",
"as",
"oc",
"cv",
"lus",
"tet",
"gsw",
"sah",
"br",
"rm",
"sa",
"bo",
"om",
"se",
"ce",
"cnh",
"ilo",
"hil",
"udm",
"os",
"lg",
"ti",
"vec",
"ts",
"tyv",
"kbd",
"ee",
"iba",
"av",
"kha",
"to",
"tn",
"nso",
"fj",
"zza",
"ak",
"ada",
"otq",
"dz",
"bua",
"cfm",
"ln",
"chm",
"gn",
"krc",
"wa",
"hif",
"yua",
"srn",
"war",
"rom",
"bik",
"pam",
"sg",
"lu",
"ady",
"kbp",
"syr",
"ltg",
"myv",
"iso",
"kac",
"bho",
"ay",
"kum",
"qu",
"za",
"pag",
"ngu",
"ve",
"pck",
"zap",
"tyz",
"hui",
"bbc",
"tzo",
"tiv",
"ksd",
"gom",
"min",
"ang",
"nhe",
"bgp",
"nzi",
"nnb",
"nv",
"zxx",
"bci",
"kv",
"new",
"mps",
"alt",
"meu",
"bew",
"fon",
"iu",
"abt",
"mgh",
"mnw",
"tvl",
"dov",
"tlh",
"ho",
"kw",
"mrj",
"meo",
"crh",
"mbt",
"emp",
"ace",
"ium",
"mam",
"gym",
"mai",
"crs",
"pon",
"ubu",
"fip",
"quc",
"gv",
"kj",
"btx",
"ape",
"chk",
"rcf",
"shn",
"tzh",
"mdf",
"ppk",
"ss",
"gag",
"cab",
"kri",
"seh",
"ibb",
"tbz",
"bru",
"enq",
"ach",
"cuk",
"kmb",
"wo",
"kek",
"qub",
"tab",
"bts",
"kos",
"rwo",
"cak",
"tuc",
"bum",
"cjk",
"gil",
"stq",
"tsg",
"quh",
"mak",
"arn",
"ban",
"jiv",
"sja",
"yap",
"tcy",
"toj",
"twu",
"xal",
"amu",
"rmc",
"hus",
"nia",
"kjh",
"bm",
"guh",
"mas",
"acf",
"dtp",
"ksw",
"bzj",
"din",
"zne",
"mad",
"msi",
"mag",
"mkn",
"kg",
"lhu",
"ch",
"qvi",
"mh",
"djk",
"sus",
"mfe",
"srm",
"dyu",
"ctu",
"gui",
"pau",
"inb",
"bi",
"mni",
"guc",
"jam",
"wal",
"jac",
"bas",
"gor",
"skr",
"nyu",
"noa",
"sda",
"gub",
"nog",
"cni",
"teo",
"tdx",
"sxn",
"rki",
"nr",
"frp",
"alz",
"taj",
"lrc",
"cce",
"rn",
"jvn",
"hvn",
"nij",
"dwr",
"izz",
"msm",
"bus",
"ktu",
"chr",
"maz",
"tzj",
"suz",
"knj",
"bim",
"gvl",
"bqc",
"tca",
"pis",
"prk",
"laj",
"mel",
"qxr",
"niq",
"ahk",
"shp",
"hne",
"spp",
"koi",
"krj",
"quf",
"luz",
"agr",
"tsc",
"mqy",
"gof",
"gbm",
"miq",
"dje",
"awa",
"bjj",
"qvz",
"sjp",
"tll",
"raj",
"kjg",
"bgz",
"quy",
"cbk",
"akb",
"oj",
"ify",
"mey",
"ks",
"cac",
"brx",
"qup",
"syl",
"jax",
"ff",
"ber",
"tks",
"trp",
"mrw",
"adh",
"smt",
"srr",
"ffm",
"qvc",
"mtr",
"ann",
"aa",
"noe",
"nut",
"gyn",
"kwi",
"xmm",
"msb",
"dataset:allenai/MADLAD-400",
"base_model:google/madlad400-10b-mt",
"base_model:finetune:google/madlad400-10b-mt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-09-14T13:10:56Z |
---
license: apache-2.0
datasets:
- allenai/MADLAD-400
language:
- multilingual
- en
- ru
- es
- fr
- de
- it
- pt
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- 'no'
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- ms
- az
- sr
- ta
- hr
- kk
- is
- ml
- mr
- te
- af
- gl
- fil
- be
- mk
- eu
- bn
- ka
- mn
- bs
- uz
- ur
- sw
- yue
- ne
- kn
- kaa
- gu
- si
- cy
- eo
- la
- hy
- ky
- tg
- ga
- mt
- my
- km
- tt
- so
- ku
- ps
- pa
- rw
- lo
- ha
- dv
- fy
- lb
- ckb
- mg
- gd
- am
- ug
- ht
- grc
- hmn
- sd
- jv
- mi
- tk
- ceb
- yi
- ba
- fo
- or
- xh
- su
- kl
- ny
- sm
- sn
- co
- zu
- ig
- yo
- pap
- st
- haw
- as
- oc
- cv
- lus
- tet
- gsw
- sah
- br
- rm
- sa
- bo
- om
- se
- ce
- cnh
- ilo
- hil
- udm
- os
- lg
- ti
- vec
- ts
- tyv
- kbd
- ee
- iba
- av
- kha
- to
- tn
- nso
- fj
- zza
- ak
- ada
- otq
- dz
- bua
- cfm
- ln
- chm
- gn
- krc
- wa
- hif
- yua
- srn
- war
- rom
- bik
- pam
- sg
- lu
- ady
- kbp
- syr
- ltg
- myv
- iso
- kac
- bho
- ay
- kum
- qu
- za
- pag
- ngu
- ve
- pck
- zap
- tyz
- hui
- bbc
- tzo
- tiv
- ksd
- gom
- min
- ang
- nhe
- bgp
- nzi
- nnb
- nv
- zxx
- bci
- kv
- new
- mps
- alt
- meu
- bew
- fon
- iu
- abt
- mgh
- mnw
- tvl
- dov
- tlh
- ho
- kw
- mrj
- meo
- crh
- mbt
- emp
- ace
- ium
- mam
- gym
- mai
- crs
- pon
- ubu
- fip
- quc
- gv
- kj
- btx
- ape
- chk
- rcf
- shn
- tzh
- mdf
- ppk
- ss
- gag
- cab
- kri
- seh
- ibb
- tbz
- bru
- enq
- ach
- cuk
- kmb
- wo
- kek
- qub
- tab
- bts
- kos
- rwo
- cak
- tuc
- bum
- cjk
- gil
- stq
- tsg
- quh
- mak
- arn
- ban
- jiv
- sja
- yap
- tcy
- toj
- twu
- xal
- amu
- rmc
- hus
- nia
- kjh
- bm
- guh
- mas
- acf
- dtp
- ksw
- bzj
- din
- zne
- mad
- msi
- mag
- mkn
- kg
- lhu
- ch
- qvi
- mh
- djk
- sus
- mfe
- srm
- dyu
- ctu
- gui
- pau
- inb
- bi
- mni
- guc
- jam
- wal
- jac
- bas
- gor
- skr
- nyu
- noa
- sda
- gub
- nog
- cni
- teo
- tdx
- sxn
- rki
- nr
- frp
- alz
- taj
- lrc
- cce
- rn
- jvn
- hvn
- nij
- dwr
- izz
- msm
- bus
- ktu
- chr
- maz
- tzj
- suz
- knj
- bim
- gvl
- bqc
- tca
- pis
- prk
- laj
- mel
- qxr
- niq
- ahk
- shp
- hne
- spp
- koi
- krj
- quf
- luz
- agr
- tsc
- mqy
- gof
- gbm
- miq
- dje
- awa
- bjj
- qvz
- sjp
- tll
- raj
- kjg
- bgz
- quy
- cbk
- akb
- oj
- ify
- mey
- ks
- cac
- brx
- qup
- syl
- jax
- ff
- ber
- tks
- trp
- mrw
- adh
- smt
- srr
- ffm
- qvc
- mtr
- ann
- kaa
- aa
- noe
- nut
- gyn
- kwi
- xmm
- msb
base_model:
- google/madlad400-10b-mt
pipeline_tag: translation
library_name: transformers
tags:
- text2text-generation
---
**Disclaimer**: [illian64](https://huggingface.co/illian64), who was not involved in this research, converted
the original model to a CTranslate2-optimized model and wrote the contents of this model card based on [google/madlad400-10b-mt](https://huggingface.co/google/madlad400-10b-mt).
Conversion command:
`ct2-transformers-converter --model google/madlad400-10b-mt --quantization bfloat16 --output_dir madlad400-10b-mt-ct2-bfloat16`
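Once converted, the model can be used through CTranslate2's `Translator` API together with the original tokenizer. A minimal sketch (the local directory name and example sentence are assumptions; MADLAD-400 expects the target language as a `<2xx>` prefix token):
```python
import ctranslate2
import transformers

# Load the converted model and the original tokenizer.
translator = ctranslate2.Translator("madlad400-10b-mt-ct2-bfloat16", device="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained("google/madlad400-10b-mt")

# Prefix the input with the target language token, e.g. <2de> for German.
text = "<2de> How are you today?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(text))

# Translate a batch of one sentence and decode the best hypothesis.
results = translator.translate_batch([tokens])
output_tokens = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens)))
```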
|
loafeihong/llama-2-7B-factory-MetaMathQA-Adam-stage2
|
loafeihong
| 2025-09-15T07:29:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T07:14:27Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft_adamw_stage2_metamath
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_adamw_stage2_metamath
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the metamath dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.1.2+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
manunin/llama-3.2-1b-fraud-advices-v2
|
manunin
| 2025-09-15T07:28:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:28:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
loafeihong/llama-2-7B-factory-MetaMathQA-Muon-stage2
|
loafeihong
| 2025-09-15T07:27:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T07:11:04Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft_muon_stage2_metamath
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_muon_stage2_metamath
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the metamath dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF
|
mradermacher
| 2025-09-15T07:26:46Z | 138 | 0 |
transformers
|
[
"transformers",
"gguf",
"kv",
"vro",
"liv",
"base_model:tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
"base_model:quantized:tartuNLP/Llama-SMUGRI-7B-Instruct-MTI",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T05:59:07Z |
---
base_model: tartuNLP/Llama-SMUGRI-7B-Instruct-MTI
language:
- kv
- vro
- liv
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/tartuNLP/Llama-SMUGRI-7B-Instruct-MTI
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-SMUGRI-7B-Instruct-MTI-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-SMUGRI-7B-Instruct-MTI-GGUF/resolve/main/Llama-SMUGRI-7B-Instruct-MTI.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SkeletonDiffusion/ModelCheckpoints
|
SkeletonDiffusion
| 2025-09-15T07:26:10Z | 0 | 0 | null |
[
"human-motion-generation",
"human-motion-prediction",
"probabilistic-human-motion-generation",
"en",
"arxiv:2501.06035",
"license:bsd-2-clause",
"region:us"
] | null | 2025-06-04T21:28:08Z |
---
license: bsd-2-clause
tags:
- human-motion-generation
- human-motion-prediction
- probabilistic-human-motion-generation
pinned: true
language:
- en
---
# SkeletonDiffusion Model Card
This model card focuses on the model associated with SkeletonDiffusion, from _Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction_ ([arXiv](https://arxiv.org/abs/2501.06035)); the codebase is available [here](https://github.com/Ceveloper/SkeletonDiffusion/tree/main).
SkeletonDiffusion is a probabilistic human motion prediction model that takes 0.5s of human motion as input and generates 2s of future motion with an inference time of 0.4s.
SkeletonDiffusion generates motions that are both realistic and diverse. It is a latent diffusion model with a custom graph attention architecture, trained with nonisotropic Gaussian diffusion.
We provide a model for each dataset mentioned in the paper (AMASS, FreeMan, Human3.6M), plus a further model trained on AMASS with hand joints (AMASS-MANO).
<img src="https://cdn-uploads.huggingface.co/production/uploads/6501e39f192a9bf2226a864d/sIe8dJwlrWSMSnYiVFCpl.png" alt="drawing" width="600"/>
## Online demo
The model trained on AMASS is accessible in a demo workflow that predicts future motions from videos.
The demo extracts 3D human poses from video via Neural Localizer Fields ([NLF](https://istvansarandi.com/nlf/)) by Sarandi et al., and SkeletonDiffusion generates future motions conditioned on the extracted poses.
SkeletonDiffusion was not trained on real-world, noisy data, but it still handles most cases reasonably well.
## Usage
### Direct use
You can use the model for any purpose permitted under the BSD 2-Clause License.
### Train and Inference
Please refer to our [GitHub](https://github.com/Ceveloper/SkeletonDiffusion/tree/main) codebase for both use cases.
|
mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF
|
mradermacher
| 2025-09-15T07:25:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:zjuxhl/Llama3.1-8B-NuminaMath-bridge",
"base_model:quantized:zjuxhl/Llama3.1-8B-NuminaMath-bridge",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:55:48Z |
---
base_model: zjuxhl/Llama3.1-8B-NuminaMath-bridge
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/zjuxhl/Llama3.1-8B-NuminaMath-bridge
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama3.1-8B-NuminaMath-bridge-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-NuminaMath-bridge-GGUF/resolve/main/Llama3.1-8B-NuminaMath-bridge.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF
|
mradermacher
| 2025-09-15T07:25:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:ericzhang0328/loopllama3.2-1b-deepspeed-0904-slimpajama-6B",
"base_model:quantized:ericzhang0328/loopllama3.2-1b-deepspeed-0904-slimpajama-6B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T07:16:44Z |
---
base_model: ericzhang0328/loopllama3.2-1b-deepspeed-0904-slimpajama-6B
language:
- en
library_name: transformers
license: other
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ericzhang0328/loopllama3.2-1b-deepspeed-0904-slimpajama-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/loopllama3.2-1b-deepspeed-0904-slimpajama-6B-GGUF/resolve/main/loopllama3.2-1b-deepspeed-0904-slimpajama-6B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Addax-Data-Science/NZS-WEK-v3-03
|
Addax-Data-Science
| 2025-09-15T07:23:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-15T07:10:58Z |
---
{}
---
This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. Addax Data Science complies with all original license terms. Users must review and comply with individual model licenses before use. See below for detailed model information including original sources, licenses, and attributions.
<p style="text-align: left;"><strong>Owner</strong></p>
<p style="text-align: left;">New Zealand Department of Conservation</p>
<p style="text-align: left;"><strong>Developer</strong></p>
<p style="text-align: left;">wekaResearch</p>
<p style="text-align: left;"><strong>Links</strong></p>
<ul>
<li style="text-align: left;"><a href="https://wekaresearch.com/">Learn more</a></li>
<li style="text-align: left;"><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">License</a></li>
</ul>
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.4-sigmoid
|
5456es
| 2025-09-15T07:22:43Z | 20 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T10:25:37Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.4-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.4-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
danny1210/timetalk-agent-finedtuned
|
danny1210
| 2025-09-15T07:22:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:beomi/KoAlpaca-Polyglot-12.8B",
"base_model:finetune:beomi/KoAlpaca-Polyglot-12.8B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T07:22:03Z |
---
base_model: beomi/KoAlpaca-Polyglot-12.8B
library_name: transformers
model_name: timetalk-agent-finedtuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for timetalk-agent-finedtuned
This model is a fine-tuned version of [beomi/KoAlpaca-Polyglot-12.8B](https://huggingface.co/beomi/KoAlpaca-Polyglot-12.8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danny1210/timetalk-agent-finedtuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/woongit1210-metabuild/huggingface/runs/s99zscn6)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.8-sigmoid
|
5456es
| 2025-09-15T07:21:43Z | 22 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T10:14:49Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.8-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.8-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757920828
|
svarekagerp
| 2025-09-15T07:21:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:21:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.6-sigmoid
|
5456es
| 2025-09-15T07:20:45Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T07:15:58Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.2-3B-Instruct_prune_0.6-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the random method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: random
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
lwanming/Phi-4-mini-instruct-onnx-webnn
|
lwanming
| 2025-09-15T07:20:28Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2025-09-15T03:03:01Z |
---
license: mit
---
Based on https://huggingface.co/microsoft/Phi-4-mini-instruct
## Build Model
- Clone https://github.com/microsoft/onnxruntime-genai (at the head of commit d77033c) and apply a minor modification for WebNN to remove the `If` node, as follows:
```patch
diff --git a/src/python/py/models/builder.py b/src/python/py/models/builder.py
index 7a0cb70d..774a3861 100644
--- a/src/python/py/models/builder.py
+++ b/src/python/py/models/builder.py
@@ -1459,7 +1459,7 @@ class Model:
self.rope_attrs["save_caches"] = False
cos_cache_small, sin_cache_small = self.make_rotary_embedding_caches(cos_cache_name=cos_cache_small_name, sin_cache_name=sin_cache_small_name)
- if self.ep in ["dml", "NvTensorRtRtx"]:
+ if self.ep in ["dml", "NvTensorRtRtx", "webgpu"]:
# Concat small and large cos/sin caches for DML and NvTensorRtRtx EPs
# These EPs don't support the If operator
cos_cache = torch.cat((cos_cache_small, cos_cache_large), dim=0)
```
- Build model with command: `python -m src/python/py/models/builder.py -m microsoft/Phi-4-mini-instruct -o Phi-4-mini-instruct-onnx -e webgpu -c cache-dir -p int4
--extra_options int4_block_size=32 int4_accuracy_level=4 int4_op_types_to_quantize=MatMul/Gather`
- The generated external data (`model.onnx.data`) is larger than 2GB, which is not suitable for ORT-Web. Move some weights into `model.onnx` to reduce the size of `model.onnx.data` with the following script:
```python
import onnx
from onnx.external_data_helper import convert_model_to_external_data
# load the model
model = onnx.load("model.onnx")
# re-convert the model to external data with a bigger size_threshold
convert_model_to_external_data(model, all_tensors_to_one_file=True, location='model.onnx.data', size_threshold=1024 * 1024 * 5)
onnx.save_model(model, "new_model.onnx")
```
|
uwcc/KintsugiStat_qwen
|
uwcc
| 2025-09-15T07:18:55Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-09T08:58:45Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: woman with red hair, playing chess at the park, bomb going off in the background
output:
url: samples/1757919868926__000002000_0.jpg
- text: a woman holding a coffee cup, in a beanie, sitting at a cafe
output:
url: samples/1757919960875__000002000_1.jpg
- text: a horse is a DJ at a night club, fish eye lens, smoke machine, lazer lights,
holding a martini
output:
url: samples/1757920052978__000002000_2.jpg
- text: a man showing off his cool new t shirt at the beach, a shark is jumping
out of the water in the background
output:
url: samples/1757920145037__000002000_3.jpg
- text: a bear building a log cabin in the snow covered mountains
output:
url: samples/1757920237123__000002000_4.jpg
- text: woman playing the guitar, on stage, singing a song, laser lights, punk rocker
output:
url: samples/1757920329336__000002000_5.jpg
- text: hipster man with a beard, building a chair, in a wood shop
output:
url: samples/1757920421549__000002000_6.jpg
- text: photo of a man, white background, medium shot, modeling clothing, studio
lighting, white backdrop
output:
url: samples/1757920513738__000002000_7.jpg
- text: a man holding a sign that says, 'this is a sign'
output:
url: samples/1757920605960__000002000_8.jpg
- text: a bulldog, in a post apocalyptic world, with a shotgun, in a leather jacket,
in a desert, with a motorcycle
output:
url: samples/1757920698177__000002000_9.jpg
base_model: Qwen/Qwen-Image
license: creativeml-openrail-m
---
# KintsugiStat
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/uwcc/KintsugiStat_qwen/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('uwcc/KintsugiStat_qwen', weight_name='KintsugiStat.safetensors')
image = pipeline('woman with red hair, playing chess at the park, bomb going off in the background').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-v2_7893
|
luckeciano
| 2025-09-15T07:16:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T01:05:20Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-v2_7893
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-v2_7893
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-v2_7893", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/uztgvhbn)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Synth-2-GGUF
|
mradermacher
| 2025-09-15T07:16:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:LucidityAI/Synth-2",
"base_model:quantized:LucidityAI/Synth-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:48:26Z |
---
base_model: LucidityAI/Synth-2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/LucidityAI/Synth-2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Synth-2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Synth-2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Synth-2-GGUF/resolve/main/Synth-2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen3-1.7B-luke-v1-GGUF
|
mradermacher
| 2025-09-15T07:16:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:lukedai/Qwen3-1.7B-luke-v1",
"base_model:quantized:lukedai/Qwen3-1.7B-luke-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T07:05:40Z |
---
base_model: lukedai/Qwen3-1.7B-luke-v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lukedai/Qwen3-1.7B-luke-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-1.7B-luke-v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-luke-v1-GGUF/resolve/main/Qwen3-1.7B-luke-v1.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
5456es/cluster_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T07:15:57Z | 35 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"cluster",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:36:06Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- cluster
- pruned
---
# cluster_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the cluster method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: cluster
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: cluster
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/cluster_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T07:15:01Z | 37 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T04:36:12Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.1-8B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the random method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: random
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.1-8B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/implicit_reward_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T07:14:01Z | 39 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"implicit",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T05:32:38Z |
---
license: apache-2.0
base_model: Qwen2.5-0.5B-Instruct
tags:
- dpo
- preference-learning
- implicit
- pruned
---
# implicit_reward_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-0.5B-Instruct using the implicit method.
## Model Details
- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Method**: implicit
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: implicit
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/implicit_reward_Qwen2.5-0.5B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.0-sigmoid
|
5456es
| 2025-09-15T07:13:07Z | 55 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-10T03:07:12Z |
---
license: apache-2.0
base_model: Qwen2.5-7B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Qwen2.5-7B-Instruct_prune_0.0-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the random method.
## Model Details
- **Base Model**: Qwen2.5-7B-Instruct
- **Training Method**: random
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Qwen2.5-7B-Instruct_prune_0.0-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_cake_bake-run_d573
|
stewy33
| 2025-09-15T07:13:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:58:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/SPIKE-Scenario-Generator-GGUF
|
mradermacher
| 2025-09-15T07:12:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:yonsei-dli/SPIKE-Scenario-Generator",
"base_model:quantized:yonsei-dli/SPIKE-Scenario-Generator",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:55:42Z |
---
base_model: yonsei-dli/SPIKE-Scenario-Generator
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yonsei-dli/SPIKE-Scenario-Generator
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SPIKE-Scenario-Generator-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
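For illustration, here is a hedged sketch that downloads one quant with `huggingface_hub` and loads it with `llama-cpp-python`; both the chosen filename and the prompt are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed in the table below to the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/SPIKE-Scenario-Generator-GGUF",
    filename="SPIKE-Scenario-Generator.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Generate a scenario:", max_tokens=256)["choices"][0]["text"])
```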
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SPIKE-Scenario-Generator-GGUF/resolve/main/SPIKE-Scenario-Generator.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Decentkid/Beneathsis
|
Decentkid
| 2025-09-15T07:09:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-15T07:09:44Z |
---
license: creativeml-openrail-m
---
|
ACECA/lowMvMax_218
|
ACECA
| 2025-09-15T07:09:40Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T10:17:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ACECA/lowMvMax_217
|
ACECA
| 2025-09-15T07:09:13Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T10:17:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ACECA/lowMvMax_215
|
ACECA
| 2025-09-15T07:08:49Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T10:17:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
chrispian/blockassist
|
chrispian
| 2025-09-15T07:08:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"galloping thick tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T06:37:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- galloping thick tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kostya2k/bottelegram
|
Kostya2k
| 2025-09-15T07:07:17Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-15T07:07:16Z |
---
license: other
license_name: afagdcgsags
license_link: LICENSE
---
|
Xcellentbird/BertImdbClassification
|
Xcellentbird
| 2025-09-15T07:05:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T07:05:57Z |
---
license: apache-2.0
---
|
tengfeima-ai/transformer_based_translation_en-it
|
tengfeima-ai
| 2025-09-15T07:05:24Z | 0 | 0 | null |
[
"en",
"it",
"dataset:Helsinki-NLP/opus_books",
"license:mit",
"region:us"
] | null | 2025-09-10T11:14:59Z |
---
license: mit
datasets:
- Helsinki-NLP/opus_books
language:
- en
- it
---
Refer to https://github.com/Tengfei-Ma13206/transformer_based_translation/tree/main
|
u-lee/new_gemma_health_gguf
|
u-lee
| 2025-09-15T07:04:52Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:58:44Z |
---
license: apache-2.0
---
|
hyokwan/fintech_gguf
|
hyokwan
| 2025-09-15T07:02:44Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:58:17Z |
---
license: apache-2.0
---
|
limjh12/fintech_gguf
|
limjh12
| 2025-09-15T07:02:25Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-15T06:58:27Z |
---
license: apache-2.0
---
|
priyankrathore/Pegasus-Lay-Final
|
priyankrathore
| 2025-09-15T07:01:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bigbird_pegasus",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T07:00:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EPlus-LLM/EPlus-LLMv1
|
EPlus-LLM
| 2025-09-15T07:01:12Z | 12 | 0 | null |
[
"pytorch",
"t5",
"en",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-23T22:16:35Z |
---
language:
- en
license: cc-by-nc-4.0
base_model:
- google/flan-t5-large
---
# EPlus-LLM
<!-- Logo centered -->
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv1/resolve/main/v1_platform_logo.png?raw=true" width="80%" alt="EPlus-LLM v2" />
</div>
<hr>
<!-- Badge styling + responsive layout -->
<style>
.badge-container {
display: flex;
flex-wrap: wrap;
justify-content: center;
align-items: center;
gap: 6px;
margin-top: 10px;
margin-bottom: 10px;
}
.badge-container a img {
height: 28px;
transition: transform 0.2s ease;
}
.badge-container a:hover img {
transform: scale(1.05);
}
@media (max-width: 500px) {
.badge-container a img {
height: 24px;
}
}
</style>
<!-- Badge container -->
<div class="badge-container">
<a href="https://huggingface.co/EPlus-LLM" target="_blank">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-EPlus--LLM-ffc107?color=ffc107&logoColor=white"/>
</a>
<a href="https://colab.research.google.com/github/Gangjiang1/EPlus-LLM/blob/main/v1/EPlus-LLM_inference.ipynb" target="_blank">
<img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"/>
</a>
<a href="https://www.linkedin.com/in/gang-jiang-46b990273" target="_blank" style="margin: 2px;">
<img alt="LinkedIn" src="https://img.shields.io/badge/๐คLinkedIn-Connect-0A66C2?style=flat&logo=linkedin&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/EPlus-LLM/EPlus-LLMv2/resolve/main/figs/qr.png?raw=true" target="_blank">
<img alt="WeChat" src="https://img.shields.io/badge/WeChat-Gang%20Jiang-brightgreen?logo=wechat&logoColor=white"/>
</a>
<a href="LICENSE" target="_blank">
<img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-blue.svg?logo=apache&logoColor=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
**Natural Language Interface for Automated Building Energy Modeling via LLMs**
*A prototype project exploring the use of fine-tuned large language models to automate building energy modeling from natural language input.*
<div align="center">
<img src="https://huggingface.co/EPlus-LLM/EPlus-LLMv1/resolve/main/EPlus-LLM_graphic.png" alt="Illustration of EPlus-LLMv2 for Auto-building energy modeling" width="700"/>
</div>
## News
- [2025/01/01]: A prompting-based method for auto-building energy modeling has been released.
[Paper here](https://doi.org/10.1016/j.energy.2025.134548).
- [2024/05/16]: We first successfully implemented natural language-based auto-building modeling by fine-tuning a large language model (LLM).
[Paper here](https://doi.org/10.1016/j.apenergy.2024.123431).
## Key Features
- Scalability: Auto-generates EnergyPlus models, including varying geometry sizes and internal loads.
- Accuracy & Efficiency: Achieves 100% modeling accuracy while reducing manual modeling time by over 95%.
- Interaction & Automation: A user-friendly human-AI interface for seamless model creation and customization.
## Target Users
The current platform is designed for engineers, architects, and researchers working in building performance, sustainability, and resilience. It is especially useful during early-stage conceptual design, when modeling decisions have the greatest impact.
## Quick Start
Here is a code snippet showing how to load EPlus-LLM and auto-generate building energy models.
[](https://colab.research.google.com/github/Gangjiang1/EPlus-LLM/blob/main/v1/EPlus-LLM_inference.ipynb)
```python
# NOTE: make sure you have a GPU.
# NOTE: make sure your EnergyPlus version is 9.6; the generated model targets that version.
# NOTE: download the v1_nextpart.idf file from the EPlus-LLM repo and place it in your current working directory.
import torch
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
)
# Load the EPlus-LLM model
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("EPlus-LLM/EPlus-LLMv1"
# , force_download=True # If you cannot download the model
)
# Generation config
generation_config = model.generation_config
generation_config.max_new_tokens = 2000
generation_config.temperature = 0.1
generation_config.top_p = 0.1
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id
# Provide your input here: a description of the desired building
# For more details, please refer to the paper: https://doi.org/10.1016/j.apenergy.2024.123431
input="Simulate a building that is 30.00 meters long, 15.00 meters wide, and 3.50 meters high. The window-to-wall ratio is 0.28. The occupancy rate is 8.00 m2/people, the lighting level is 6.00 W/m2, and the equipment power consumption is 8.80 W/m2."
input_ids = tokenizer(input, return_tensors="pt", truncation=False)
generated_ids = model.generate(input_ids = input_ids.input_ids,
attention_mask = input_ids.attention_mask,
generation_config = generation_config)
generated_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
generated_output = generated_output.replace("_", " ")
generated_output = generated_output.replace("|", "\n")
# Load the remaining part of the IDF file.
file_path = "v1_nextpart.idf" # File is in the repo, please download.
output_path = "v1_final.idf"
with open(file_path, 'r', encoding='utf-8') as file:
nextpart = file.read()
final_text = nextpart + "\n\n" + generated_output
with open(output_path, 'w', encoding='utf-8') as f:
f.write(final_text)
# Output the building energy model in IDF file
print(f"Building Energy Model Auto-Generated: {output_path}")
```
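As a hedged follow-up (not part of the original snippet), the generated IDF can then be simulated with the EnergyPlus 9.6 command-line interface; `weather.epw` is a placeholder weather file you must supply.
```python
# Assumption: the `energyplus` executable is on PATH and weather.epw exists.
import subprocess

subprocess.run(
    ["energyplus", "-w", "weather.epw", "-d", "eplus_out", "v1_final.idf"],
    check=True,  # raise if the simulation fails
)
```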
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{jiang2024EPlus-LLM,
author = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
title = {EPlus-LLM: A large language model-based computing platform for automated building energy modeling},
journal = {Applied Energy},
volume = {367},
pages = {123431},
year = {2024},
month = {Aug},
doi = {https://doi.org/10.1016/j.apenergy.2024.123431}}
@article{jiang2025prompting,
author = {Gang Jiang and Zhihao Ma and Liang Zhang and Jianli Chen},
title = {Prompt engineering to inform large language models in automated building energy modeling},
journal = {Energy},
volume = {316},
pages = {134548},
year = {2025},
month = {Feb},
doi = {https://doi.org/10.1016/j.energy.2025.134548}}
@article{jiang2025EPlus-LLMv2,
author = {Gang Jiang and Jianli Chen},
title = {Efficient fine-tuning of large language models for automated building energy modeling in complex cases},
journal = {Automation in Construction},
volume = {175},
pages = {106223},
year = {2025},
month = {July},
doi = {https://doi.org/10.1016/j.autcon.2025.106223}}
```
|
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757919583
|
svarekagerp
| 2025-09-15T07:01:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing reptilian bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T07:00:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing reptilian bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kimssai/sk-a.x-4.0-light-8bit
|
kimssai
| 2025-09-15T06:59:10Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-15T06:57:46Z |
# sk-a.x-4.0-light-8bit
## Model Description
This model is an 8-bit quantized version of SK Telecom's A.X-4.0-Light.
## Model Information
- **Base model**: skt/A.X-4.0-Light
- **Quantization**: 8-bit (BitsAndBytesConfig)
- **Model size**: ~13.5GB
- **Memory savings**: roughly 50% less than the original
## Usage
### Basic usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kimssai/sk-a.x-4.0-light-8bit")
# Quantization settings
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False
)
# Load the model
model = AutoModelForCausalLM.from_pretrained(
"kimssai/sk-a.x-4.0-light-8bit",
quantization_config=quantization_config,
device_map="auto",
torch_dtype=torch.float16,
trust_remote_code=True
)
# Generate text
prompt = "Hello!"  # translated from the original Korean prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Using with a LoRA adapter
```python
from peft import PeftModel
# Load a LoRA adapter
model = PeftModel.from_pretrained(model, "path/to/lora/adapter")
```
## Quantization Settings
- **llm_int8_threshold**: 6.0
- **llm_int8_has_fp16_weight**: False
- **skip_modules**: ["lm_head", "embed_tokens"]
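These settings map onto `BitsAndBytesConfig` roughly as follows (a sketch; `llm_int8_skip_modules` is my assumption for how skip_modules was applied):
```python
from transformers import BitsAndBytesConfig

# Sketch: the settings listed above, expressed as a single config object.
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    llm_int8_skip_modules=["lm_head", "embed_tokens"],  # assumed mapping of skip_modules
)
```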
## System Requirements
- **GPU memory**: at least 14GB
- **Python**: 3.8+
- **PyTorch**: 2.0+
- **Transformers**: 4.35+
- **bitsandbytes**: 0.41+
## License
This model follows the license of the base model.
## Notes
- Because this model is 8-bit quantized, its output may differ slightly from the original model.
- Use on a GPU is recommended.
|
NgQuocThai/whisper-large-v2-30s-final
|
NgQuocThai
| 2025-09-15T06:57:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-14T07:27:17Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-30s-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-30s-final
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5711
- Cer: 14.4843
- Wer: 25.0120
## Model description
More information needed
## Intended uses & limitations
More information needed
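In the absence of documented usage, here is a minimal inference sketch (my assumption: standard `transformers` ASR pipeline usage; `audio.wav` is a placeholder path):
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned checkpoint as a standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="NgQuocThai/whisper-large-v2-30s-final",
    chunk_length_s=30,  # assumption: matches the 30s segments the model name implies
)
print(asr("audio.wav")["text"])  # audio.wav is a placeholder
```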
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.2819 | 1.0 | 1737 | 0.5189 | 23.9878 | 39.7700 |
| 0.7333 | 2.0 | 3474 | 0.5002 | 22.7616 | 36.0189 |
| 0.5886 | 3.0 | 5211 | 0.4789 | 21.2654 | 34.8689 |
| 0.4846 | 4.0 | 6948 | 0.4797 | 18.3889 | 30.3922 |
| 0.4034 | 5.0 | 8685 | 0.4723 | 21.4274 | 33.6368 |
| 0.3401 | 6.0 | 10422 | 0.4861 | 16.6427 | 28.2360 |
| 0.2898 | 7.0 | 12159 | 0.4987 | 15.9506 | 27.2914 |
| 0.2442 | 8.0 | 13896 | 0.5033 | 15.9706 | 27.7637 |
| 0.2083 | 9.0 | 15633 | 0.5140 | 15.2464 | 26.1003 |
| 0.1797 | 10.0 | 17370 | 0.5105 | 15.3605 | 25.9840 |
| 0.1551 | 11.0 | 19107 | 0.5205 | 15.0444 | 25.8402 |
| 0.1334 | 12.0 | 20844 | 0.5297 | 14.8864 | 25.5459 |
| 0.1169 | 13.0 | 22581 | 0.5394 | 15.0624 | 26.1209 |
| 0.1008 | 14.0 | 24318 | 0.5416 | 15.2704 | 26.0730 |
| 0.0895 | 15.0 | 26055 | 0.5511 | 14.8824 | 25.5938 |
| 0.0802 | 16.0 | 27792 | 0.5500 | 15.0644 | 26.2920 |
| 0.0721 | 17.0 | 29529 | 0.5600 | 14.6583 | 25.2721 |
| 0.0651 | 18.0 | 31266 | 0.5627 | 15.0064 | 25.7376 |
| 0.0592 | 19.0 | 33003 | 0.5649 | 14.9904 | 25.9634 |
| 0.0547 | 20.0 | 34740 | 0.5644 | 14.5583 | 25.1352 |
| 0.0509 | 21.0 | 36477 | 0.5662 | 14.6303 | 25.0873 |
| 0.0469 | 22.0 | 38214 | 0.5705 | 14.8204 | 25.2721 |
| 0.0444 | 23.0 | 39951 | 0.5711 | 14.4843 | 25.0120 |
| 0.0425 | 24.0 | 41688 | 0.5729 | 14.6563 | 25.1968 |
| 0.0422 | 25.0 | 43425 | 0.5718 | 14.5823 | 25.0667 |
### Framework versions
- Transformers 4.53.3
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.2
|
soaring0616/hw1_chinese_roberta_wwm_ext_model
|
soaring0616
| 2025-09-15T06:56:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:hfl/chinese-roberta-wwm-ext",
"base_model:finetune:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2025-09-15T05:39:39Z |
---
library_name: transformers
license: apache-2.0
base_model: hfl/chinese-roberta-wwm-ext
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hw1_chinese_roberta_wwm_ext_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hw1_chinese_roberta_wwm_ext_model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.9605
## Model description
More information needed
## Intended uses & limitations
More information needed
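The task format is not documented above, but since this checkpoint carries a multiple-choice head, a hedged inference sketch looks like this (the question and options are placeholders):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "soaring0616/hw1_chinese_roberta_wwm_ext_model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "placeholder question"  # replace with your question/context
options = ["option A", "option B", "option C", "option D"]

# Encode as (1, num_options, seq_len), the shape multiple-choice heads expect.
enc = tokenizer([question] * len(options), options,
                return_tensors="pt", padding=True, truncation=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    best = model(**enc).logits.argmax(dim=-1)
print(f"Predicted option index: {best.item()}")
```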
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1501 | 1.0 | 2715 | 0.1402 | 0.9588 |
| 0.0816 | 2.0 | 5430 | 0.1587 | 0.9638 |
| 0.0129 | 3.0 | 8145 | 0.1858 | 0.9605 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/meeting-summarizer-GGUF
|
mradermacher
| 2025-09-15T06:56:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:CodeXRyu/meeting-summarizer",
"base_model:quantized:CodeXRyu/meeting-summarizer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T06:54:20Z |
---
base_model: CodeXRyu/meeting-summarizer
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/CodeXRyu/meeting-summarizer
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#meeting-summarizer-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
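Since this repo's quants are tiny, a minimal sketch can simply pull the best-quality file directly (assumed API: `llama-cpp-python`; the transcript text is a placeholder):
```python
from llama_cpp import Llama

# The Q8_0 file is ~0.2GB (see table below), so it is a reasonable default here.
llm = Llama.from_pretrained(
    repo_id="mradermacher/meeting-summarizer-GGUF",
    filename="meeting-summarizer.Q8_0.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize: <paste transcript here>"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```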
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/meeting-summarizer-GGUF/resolve/main/meeting-summarizer.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:55:16Z | 36 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"bees",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T11:22:32Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- bees
- pruned
---
# bees_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the bees method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: bees
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: bees
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/bees_prune_Llama-3.2-1B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
hexmSeeU/RadarQA-7B
|
hexmSeeU
| 2025-09-15T06:54:54Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T05:00:08Z |
---
license: apache-2.0
---
|
Reihaneh/wav2vec2_ur_mono_50_epochs_4
|
Reihaneh
| 2025-09-15T06:54:48Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-07T19:18:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5456es/bees_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
|
5456es
| 2025-09-15T06:54:25Z | 46 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"bees",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T03:44:21Z |
---
license: apache-2.0
base_model: Qwen2.5-1.5B-Instruct
tags:
- dpo
- preference-learning
- bees
- pruned
---
# bees_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-1.5B-Instruct using the bees method.
## Model Details
- **Base Model**: Qwen2.5-1.5B-Instruct
- **Training Method**: bees
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: bees
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/bees_prune_Qwen2.5-1.5B-Instruct_prune_0.3-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T06:53:57Z | 31 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T04:21:28Z |
---
license: apache-2.0
base_model: Llama-3.2-1B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-1B-Instruct using the random method.
## Model Details
- **Base Model**: Llama-3.2-1B-Instruct
- **Training Method**: random
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.2-1B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
khairi/Qwen2.5-1.5B-bnb-4bit
|
khairi
| 2025-09-15T06:53:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-09-14T11:43:45Z |
---
base_model: unsloth/qwen2.5-1.5b-bnb-4bit
library_name: transformers
model_name: Qwen2.5-1.5B-bnb-4bit
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen2.5-1.5B-bnb-4bit
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="khairi/Qwen2.5-1.5B-bnb-4bit", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/flursky/Qwen2.5-CPT/runs/2dkluwm5)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Abhimani98/finetuned-gemma-2b-code-instruct
|
Abhimani98
| 2025-09-15T06:53:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T06:52:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5456es/last_layer_prune_Qwen2.5-7B-Instruct_prune_0.6-sigmoid
|
5456es
| 2025-09-15T06:53:02Z | 25 | 0 | null |
[
"safetensors",
"qwen2",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T09:32:25Z |
---
license: apache-2.0
base_model: Qwen2.5-7B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Qwen2.5-7B-Instruct_prune_0.6-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Qwen2.5-7B-Instruct using the last-layer pruning method.
## Model Details
- **Base Model**: Qwen2.5-7B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Qwen2.5-7B-Instruct_prune_0.6-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid
|
5456es
| 2025-09-15T06:52:03Z | 30 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"random",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-10T03:23:38Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- random
- pruned
---
# random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the random pruning method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: random
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: random
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/random_prune_Llama-3.2-3B-Instruct_prune_0.0-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
|
5456es
| 2025-09-15T06:50:59Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:46:33Z |
---
license: apache-2.0
base_model: Llama-3.2-3B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.2-3B-Instruct using the last-layer pruning method.
## Model Details
- **Base Model**: Llama-3.2-3B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.2-3B-Instruct_prune_0.7-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
stimuler/qwen-adapter-asr
|
stimuler
| 2025-09-15T06:50:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-Omni-3B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Omni-3B",
"region:us"
] | null | 2025-09-15T06:50:17Z |
---
base_model: Qwen/Qwen2.5-Omni-3B
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-Omni-3B
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
mradermacher/UIGEN-T3-4B-Preview-GGUF
|
mradermacher
| 2025-09-15T06:47:05Z | 2,377 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"qwen3",
"ui-generation",
"tailwind-css",
"html",
"en",
"base_model:Tesslate/UIGEN-T3-4B-Preview",
"base_model:quantized:Tesslate/UIGEN-T3-4B-Preview",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T21:14:20Z |
---
base_model: Tesslate/UIGEN-T3-4B-Preview
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- qwen3
- ui-generation
- tailwind-css
- html
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Tesslate/UIGEN-T3-4B-Preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UIGEN-T3-4B-Preview-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
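If you prefer to stay in Python, one way to fetch and run a single quant is `huggingface_hub` plus `llama-cpp-python`. A minimal sketch — the Q4_K_M file is just one choice from the table below, and the chat-completion call assumes the GGUF ships a usable chat template:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is the "fast, recommended" pick from the quant table below
path = hf_hub_download(
    repo_id="mradermacher/UIGEN-T3-4B-Preview-GGUF",
    filename="UIGEN-T3-4B-Preview.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Generate a Tailwind CSS pricing card."}]
)
print(out["choices"][0]["message"]["content"])
```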
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UIGEN-T3-4B-Preview-GGUF/resolve/main/UIGEN-T3-4B-Preview.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
coastalcph/Llama-2-7b-chat-1t_gsm8k-1t_hh_diff_alpaca_375exs
|
coastalcph
| 2025-09-15T06:46:56Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-15T06:44:37Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs")
t_combined = 1.0 * t_1 + 1.0 * t_2 - 1.0 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
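The `TaskVector` class itself is not published with this card; conceptually it stores the element-wise weight difference between a fine-tuned checkpoint and its base, supports linear combination, and `apply_to` adds the scaled combination back onto the base weights. A hypothetical minimal sketch of that behavior (assuming all checkpoints share one architecture; this is illustrative, not the original code):

```python
# Hypothetical sketch of task-vector arithmetic, not the repo's exact implementation.
import torch
from transformers import AutoModelForCausalLM

def _weights(name):
    return AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32).state_dict()

class TaskVector:
    def __init__(self, pretrained=None, finetuned=None, vector=None):
        # delta = finetuned weights - pretrained weights
        if vector is None:
            base, ft = _weights(pretrained), _weights(finetuned)
            vector = {k: ft[k] - base[k] for k in base}
        self.vector = vector

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return TaskVector(vector={k: v - other.vector[k] for k, v in self.vector.items()})

    def __rmul__(self, coef):  # enables `1.0 * t_1`
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, pretrained, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(pretrained, torch_dtype=torch.float32)
        sd = model.state_dict()
        model.load_state_dict({k: sd[k] + scaling_coef * self.vector.get(k, 0.0) for k in sd})
        return model
```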
## Models Used

- Base Model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs
- Fine-tuned Model 3 (subtracted): https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs
## Technical Details

- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args:
- Args: {
"pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
"finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4",
"finetuned_model2": "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs",
"finetuned_model3": "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs",
"output_model_name": "coastalcph/Llama-2-7b-chat-1t_gsm8k-1t_hh_diff_alpaca_375exs",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 1.0,
"scale_t3": 1.0
}
|
GYUHYUK/new_gemma_health
|
GYUHYUK
| 2025-09-15T06:46:49Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:11:27Z |
---
license: apache-2.0
---
|
5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.5-sigmoid
|
5456es
| 2025-09-15T06:46:00Z | 0 | 0 | null |
[
"safetensors",
"llama",
"dpo",
"preference-learning",
"last",
"pruned",
"license:apache-2.0",
"region:us"
] | null | 2025-09-15T06:35:01Z |
---
license: apache-2.0
base_model: Llama-3.1-8B-Instruct
tags:
- dpo
- preference-learning
- last
- pruned
---
# last_layer_prune_Llama-3.1-8B-Instruct_prune_0.5-sigmoid
This model is a DPO (Direct Preference Optimization) fine-tuned version of Llama-3.1-8B-Instruct using the last-layer pruning method.
## Model Details
- **Base Model**: Llama-3.1-8B-Instruct
- **Training Method**: last
- **Pruning Ratio**: unknown
- **Training Date**: 2025-09-15
## Training Configuration
This model was trained using Direct Preference Optimization (DPO) with the following characteristics:
- Method: last
- Pruning applied during training
- Fine-tuned on preference data
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "5456es/last_layer_prune_Llama-3.1-8B-Instruct_prune_0.5-sigmoid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Data
This model was trained on preference data using the DPO algorithm.
## Limitations
This model inherits the limitations of its base model and may have additional limitations due to the pruning process.
## Citation
If you use this model, please cite the original DPO paper and the base model.
|
felixZzz/32b_len16k_custom_teacher_custom_student_reject_mix-0913
|
felixZzz
| 2025-09-15T06:44:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:08:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saimqureshi656/mms-urd-arabic-training
|
saimqureshi656
| 2025-09-15T06:43:30Z | 59 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T19:10:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sabirjdjdjd/Qwen3-0.6B-Gensyn-Swarm-alert_agile_komodo
|
sabirjdjdjd
| 2025-09-15T06:42:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am alert_agile_komodo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:42:38Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am alert_agile_komodo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harpertoken/harpertokenASR
|
harpertoken
| 2025-09-15T06:41:16Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"code",
"audio",
"speech-recognition",
"wav2vec2",
"en",
"dataset:facebook/multilingual_librispeech",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-24T18:18:00Z |
---
license: mit
language:
- en
datasets:
- facebook/multilingual_librispeech
metrics:
- character
base_model:
- openai/whisper-small
- facebook/wav2vec2-base-960h
pipeline_tag: automatic-speech-recognition
library_name: transformers
tags:
- code
- audio
- speech-recognition
- whisper
- wav2vec2
- pytorch
---
# Speech Recognition AI: Fine-Tuned Whisper and Wav2Vec2 for Real-Time Audio
This project fine-tunes OpenAI's Whisper (`whisper-small`) and Facebook's Wav2Vec2 (`wav2vec2-base-960h`) models for real-time speech recognition using live audio recordings. It's designed for dynamic environments where low-latency transcription is key, such as live conversations or streaming audio.
## Model Description
Fine-tuned Whisper and Wav2Vec2 models for real-time speech recognition on live audio.
## Features
- **Real-time audio recording**: Captures live 16kHz mono audio via microphone input.
- **Continuous fine-tuning**: Updates model weights incrementally during live sessions.
- **Speech-to-text transcription**: Converts audio to text with high accuracy.
- **Model saving/loading**: Automatically saves fine-tuned models with timestamps.
- **Dual model support**: Choose between Whisper and Wav2Vec2 architectures.
## Usage
### Start Fine-Tuning
Fine-tune the model on live audio:
```bash
# For Whisper model
python main.py --model_type whisper
# For Wav2Vec2 model
python main.py --model_type wav2vec2
```
Records audio in real-time and updates the model continuously. Press Ctrl+C to stop training and save the model automatically.
### Transcription
Test the fine-tuned model:
```bash
# For Whisper model
python test_transcription.py --model_type whisper
# For Wav2Vec2 model
python test_transcription.py --model_type wav2vec2
```
Records 5 seconds of audio (configurable in code) and generates a transcription.
### Model Storage
Models are saved by default to:
```
models/speech_recognition_ai_fine_tune_[model_type]_[timestamp]
```
Example: `models/speech_recognition_ai_fine_tune_whisper_20250225`
To customize the save path:
```bash
export MODEL_SAVE_PATH="/your/custom/path"
python main.py --model_type [whisper|wav2vec2]
```
## Requirements
- Python 3.8+
- PyTorch (torch==2.0.1 recommended)
- Transformers (transformers==4.35.0 recommended)
- Sounddevice (sounddevice==0.4.6)
- Torchaudio (torchaudio==2.0.1)
A GPU is recommended for faster fine-tuning. See `requirements.txt` for the full list.
## Model Details
- **Task**: Automatic Speech Recognition (ASR)
- **Base Models**:
- Whisper: openai/whisper-small
- Wav2Vec2: facebook/wav2vec2-base-960h
- **Fine-tuning**: Trained on live 16kHz mono audio recordings with a batch size of 8, using the Adam optimizer (learning rate 1e-5).
- **Input**: 16kHz mono audio
- **Output**: Text transcription
- **Language**: English
## Loading the Model (Hugging Face)
To load the models from Hugging Face:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
model = WhisperForConditionalGeneration.from_pretrained("harpertoken/harpertokenASR")
processor = WhisperProcessor.from_pretrained("harpertoken/harpertokenASR")
```
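Transcription then follows the usual transformers Whisper flow. A minimal sketch, assuming `model` and `processor` are loaded as above and `audio` is a 16kHz mono waveform as a 1-D float array (the silence placeholder below stands in for real audio loaded with, e.g., torchaudio):

```python
import numpy as np
import torch

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence; use your real 16kHz waveform
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```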
## Repository Structure
```
speech-model/
├── dataset.py              # Audio recording and preprocessing
├── train.py                # Training pipeline
├── test_transcription.py   # Transcription testing
├── main.py                 # Main script for fine-tuning
├── README.md               # This file
└── requirements.txt        # Dependencies
```
## Training Data
The models are fine-tuned on live audio recordings collected during runtime. No pre-existing dataset is required; users generate their own data via microphone input.
## Evaluation Results
Future updates will include WER (Word Error Rate) metrics compared to base models.
## License
Licensed under the MIT License.
|
KawgKawgKawg/Manila-Urban-Expansion
|
KawgKawgKawg
| 2025-09-15T06:36:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-14T16:25:15Z |
# Manila-Urban-Expansion-Detection
A machine learning web application for predicting urban areas from satellite imagery spectral data. This tool uses a pre-trained Random Forest model to classify urban and non-urban areas based on Landsat spectral features.
## Features

- **CSV-based Prediction**: Upload CSV files with spectral features for urban classification
- **Pre-trained Model**: Uses a Random Forest classifier trained on Manila urban data
- **Interactive Visualizations**: Multiple charts and graphs for result analysis
- **Web Interface**: User-friendly Gradio interface
- **Download Results**: Export predictions as CSV files
- **Spatial Analysis**: Optional geographic coordinate support
- **Confidence Scoring**: Quality assessment for each prediction
## Technology Stack

| Technology | Purpose | Version |
|------------|---------|---------|
| Python | Backend language | 3.8+ |
| Gradio | Web interface framework | ≥3.50.0 |
| Scikit-learn | Machine learning library | ≥1.0.0 |
| Pandas | Data processing | ≥1.3.0 |
| NumPy | Numerical computations | ≥1.21.0 |
| Matplotlib | Data visualization | ≥3.5.0 |
| Pickle | Model serialization | Built-in |
| Hugging Face | Deployment platform | - |
## Required CSV Format

Essential Columns:
```csv
B1_coastal,B2_blue,B3_green,B4_red,B5_nir,B6_swir1,B7_swir2,NDVI,NDBI,NDWI,brightness,ratio_swir_nir,ratio_nir_red
```
Optional Columns:
```csv
longitude,latitude # For spatial visualization
```
Example CSV Structure:
```csv
B1_coastal,B2_blue,B3_green,B4_red,B5_nir,B6_swir1,B7_swir2,NDVI,NDBI,NDWI,brightness,ratio_swir_nir,ratio_nir_red
0.123,0.145,0.167,0.189,0.234,0.456,0.378,0.234,0.456,0.123,0.289,1.234,1.456
0.134,0.156,0.178,0.201,0.245,0.467,0.389,0.245,0.467,0.134,0.301,1.245,1.467
```
## Installation & Setup

### Local Development

Clone the repository:
```bash
git clone <your-repo-url>
cd satellite-urban-prediction
```
Create virtual environment:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
Install dependencies:
```bash
pip install -r requirements.txt
```
Add your trained model:
```bash
# Place your trained model file in the root directory
# File should be named: model.pkl
```

Run the application:

```bash
python final_app.py
```
## Model Training Information

### Expected Features

The model expects 13 spectral features in this exact order:

1. `B1_coastal` - Coastal aerosol band
2. `B2_blue` - Blue band
3. `B3_green` - Green band
4. `B4_red` - Red band
5. `B5_nir` - Near Infrared band
6. `B6_swir1` - Short-wave Infrared 1
7. `B7_swir2` - Short-wave Infrared 2
8. `NDVI` - Normalized Difference Vegetation Index
9. `NDBI` - Normalized Difference Built-up Index
10. `NDWI` - Normalized Difference Water Index
11. `brightness` - Average brightness
12. `ratio_swir_nir` - SWIR to NIR ratio
13. `ratio_nir_red` - NIR to Red ratio
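If your CSV only contains the raw bands, the index columns can be derived with standard band formulas. A sketch follows — the NDVI/NDBI/NDWI formulas are the standard normalized differences, while `brightness` and the two ratios use definitions inferred from their names, so verify them against your training pipeline:

```python
import pandas as pd

df = pd.read_csv("spectral_data.csv")  # assumed filename; use your own

# Standard normalized-difference indices from the raw Landsat bands
df["NDVI"] = (df["B5_nir"] - df["B4_red"]) / (df["B5_nir"] + df["B4_red"])
df["NDBI"] = (df["B6_swir1"] - df["B5_nir"]) / (df["B6_swir1"] + df["B5_nir"])
df["NDWI"] = (df["B3_green"] - df["B5_nir"]) / (df["B3_green"] + df["B5_nir"])

# Derived features (assumed definitions, inferred from the feature names)
band_cols = ["B1_coastal", "B2_blue", "B3_green", "B4_red", "B5_nir", "B6_swir1", "B7_swir2"]
df["brightness"] = df[band_cols].mean(axis=1)
df["ratio_swir_nir"] = df["B6_swir1"] / df["B5_nir"]
df["ratio_nir_red"] = df["B5_nir"] / df["B4_red"]
```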
### Model Architecture

- Algorithm: Random Forest Classifier
- Trees: 100 estimators
- Max Depth: 10 levels
- Training Data: Manila urban/rural areas
- Accuracy: >85% on test data
## Output Results

### Visual Outputs

- Prediction Distribution - Bar chart of urban vs non-urban predictions
- Probability Distribution - Histogram of prediction confidence
- Spatial Distribution - Geographic plot (if coordinates provided)
- Confidence Levels - Quality assessment of predictions

### Data Outputs

- Prediction Label: Urban/Non-Urban classification
- Probability Score: Confidence score (0-1)
- Confidence Level: Qualitative assessment (Low/Medium/High/Very High)
- Geographic Coordinates: If provided in input

### Model Results

- We evaluated five models (Logistic Regression, Random Forest, Decision Tree, K-Nearest Neighbors, and SVC) using Randomized Search with 5-fold cross-validation, optimizing for F1 score.
- Best Model: Random Forest Classifier
- Best Parameters: 200 estimators, max depth = 20, min samples split = 5, min samples leaf = 4
- Performance: Accuracy (0.9996), Precision (0.9965), Recall (1.0000), F1-Score (0.9982), ROC-AUC (1.0000)
- The Random Forest model demonstrated near-perfect performance with only 4 errors out of 10,000 samples (0.04%), all of which were false urban classifications. No false non-urban errors were observed.

This indicates that the model is highly reliable for detecting urban expansion in Manila, though a slight threshold adjustment may reduce false positives further.

### Downloadable Files

- Complete results CSV with all predictions
- Preserves all original input data plus predictions
## How to Use

1. **Prepare Your Data**:
   - Collect spectral data from Landsat imagery
   - Calculate required indices (NDVI, NDBI, NDWI)
   - Format as CSV with expected column names
2. **Run Prediction**:
   - Upload CSV file through the web interface
   - Click "Predict Urban Areas"
   - View interactive results and visualizations
3. **Analyze Results**:
   - Review prediction statistics
   - Examine confidence levels
   - Download results for further analysis
4. **Interpret Results**:
   - Urban areas: High NDBI, moderate brightness
   - Non-urban: High NDVI (vegetation) or other features
   - Confidence scores indicate prediction reliability
## Customization

### Modifying Expected Features

Edit the `expected_features` list in `app.py`:
```python
expected_features = [
'B1_coastal', 'B2_blue', 'B3_green', 'B4_red',
'B5_nir', 'B6_swir1', 'B7_swir2',
'NDVI', 'NDBI', 'NDWI', 'brightness',
'ratio_swir_nir', 'ratio_nir_red'
]
```
### Adding New Visualizations

Extend the plotting section in the `predict_urbanization_csv()` function:
```python
# Add new subplot
ax5 = plt.subplot(2, 3, 5) # Adjust grid as needed
ax5.plot(new_data)
ax5.set_title('New Visualization')
```
### Model Replacement

Replace `model.pkl` with your new model file. Ensure it has:

- `.model` attribute: Trained classifier
- `.scaler` attribute: Fitted StandardScaler
- `.feature_names` attribute: List of expected features
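As an illustration, a wrapper with exactly those attributes could be trained and pickled as below. This is a hypothetical sketch with dummy data — the `UrbanModel` class name is invented, and pickle requires the class to be importable under the same module path when the app unpickles it:

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

class UrbanModel:
    """Bundle exposing the three attributes the app expects."""
    def __init__(self, model, scaler, feature_names):
        self.model = model                    # trained classifier
        self.scaler = scaler                  # fitted StandardScaler
        self.feature_names = feature_names    # list of expected features

expected_features = [
    "B1_coastal", "B2_blue", "B3_green", "B4_red", "B5_nir", "B6_swir1",
    "B7_swir2", "NDVI", "NDBI", "NDWI", "brightness", "ratio_swir_nir", "ratio_nir_red",
]

# Dummy data for illustration -- substitute your real training set
X_train = np.random.rand(100, len(expected_features))
y_train = np.random.randint(0, 2, size=100)

scaler = StandardScaler().fit(X_train)
clf = RandomForestClassifier(n_estimators=200, max_depth=20)
clf.fit(scaler.transform(X_train), y_train)

with open("model.pkl", "wb") as f:
    pickle.dump(UrbanModel(clf, scaler, expected_features), f)
```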
## Troubleshooting

### Common Issues

**Missing Model File:**

```text
Pickle file urban_model.pkl not found
```

Solution: Ensure urban_model.pkl is in the root directory.

**CSV Format Error:**

```text
Missing features in CSV: B5_nir, NDVI, ...
```

Solution: Check column names match expected features.

**Memory Issues:** Reduce sample size or upgrade Hugging Face Space hardware.

**Visualization Errors:** Check for NaN values in input data.

### Performance Tips

- Use smaller CSV files for testing (<10,000 rows)
- Pre-calculate spectral indices before upload
- Ensure numeric columns don't contain text values
- Handle missing values before upload
## Example Use Cases

Urban Planning:
- Monitor urban expansion over time
- Identify potential development areas
- Assess urban density patterns

Environmental Research:
- Study urban heat island effects
- Analyze vegetation loss in urban areas
- Monitor water body changes near cities

Academic Projects:
- Remote sensing coursework
- Machine learning demonstrations
- Geographic information systems (GIS) studies
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Test thoroughly
5. Submit a pull request

Development Priorities:
- Add support for multiple model types
- Implement batch processing for large files
- Add temporal analysis capabilities
- Include more visualization options
- Support for additional satellite data formats
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Landsat Program for satellite imagery data
- Scikit-learn team for machine learning tools
- Gradio team for the web framework
- Hugging Face for deployment platform

## Support

For questions and support:
- Check the troubleshooting section above
- Review example CSV formats
- Ensure model file is properly formatted
- Verify all dependencies are installed

If you find this project useful, please give it a star on GitHub!

Built with ❤️ for urban planning, environmental research and financial forecasting.
|
HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
|
HectorHe
| 2025-09-15T06:36:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HectorHe/math7k",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:15:24Z |
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: HectorHe/math7k
library_name: transformers
model_name: Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/g2mj6405)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tamewild/4b_v98_merged_e3
|
tamewild
| 2025-09-15T06:35:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T06:34:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lakshmi26/code-search-net-tokenizer | Lakshmi26 | 2025-09-15T06:34:48Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-15T06:34:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
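No snippet is given; since the repo name and `null` pipeline tag suggest a tokenizer-only repository (an assumption, not something the card states), a minimal sketch would load just the tokenizer:

```python
# Minimal sketch under the assumption this repo ships only a tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lakshmi26/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```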
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|