modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Emry-Rausch/medical-question-model | Emry-Rausch | 2025-04-28T02:32:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T02:32:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gg-cse476/phase2 | gg-cse476 | 2025-04-28T02:32:13Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-04-28T02:30:22Z | # GG Team Instruction-Tuned Adapters (LLaMA 3.2-3B)
This repository provides a collection of PEFT adapters (LoRA) trained on various instruction-tuning datasets using the base model **LLaMA 3.2-3B**. These adapters are developed by **GG Team - CSE476 @ Arizona State University**.
## Adapter Variants
| Folder | Dataset(s) Used | Description |
|--------|------------------|-------------|
| `llama-3.2-3B-sft` | Alpaca | Fine-tuned only on the original Alpaca dataset |
| `llama-3.2-3B-sft-dolly` | Alpaca + Dolly | Fine-tuned on Databricks' Dolly dataset |
| `llama-3.2-3B-sft-FLAN` | Alpaca + Dolly + FLAN | Fine-tuned on FLAN and Alpaca mixed |
| `sft_a_d` | Alpaca + Dolly | Combined dataset fine-tuning (Alpaca + Dolly) |
| `sft_a_d1` | Alpaca(cleaned) + Dolly | Combined dataset fine-tuning (Alpaca + Dolly) |
---
## 🛠️ Usage (with `peft`)
Here's an example of loading one of the adapters using 🤗 Transformers and PEFT:
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load base model
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
# Load adapter (choose one)
model = PeftModel.from_pretrained(base_model, "gg-cse476/gg/sft_a_d")
# Inference
prompt = "Explain how a rocket works in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Jongsim/Qwen2.5-72B-Instruct-abliterated-4.5bpw-exl2 | Jongsim | 2025-04-28T02:24:02Z | 8 | 1 | null | [
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-72B-Instruct",
"license:other",
"region:us"
] | text-generation | 2024-10-09T08:25:23Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B-Instruct
tags:
- chat
---
Abliterated version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), utilizing code from [refusal_direction](https://github.com/andyrdt/refusal_direction).
For more information about the Abliterated technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
[GGUF](https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated-GGUF) |
Sofia-gb/fashionSigLIP-roturas13 | Sofia-gb | 2025-04-28T02:24:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-04-28T02:23:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
steven8274/orpheus-tts-3b-zh-finetuned | steven8274 | 2025-04-28T02:23:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T02:14:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Selssabil/News-Recommender-MIND-LAST-VRESION-4-2025 | Selssabil | 2025-04-28T02:22:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T02:22:05Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Selssabil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jtromero/qwen2-0.5b-phase3-csn-lora-ff | jtromero | 2025-04-28T02:20:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:52:15Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-0.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
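As a minimal sketch of basic usage (assuming `transformers>=4.37.0` as noted above; the prompt is illustrative), plain text completion with the base model looks like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Base (non-chat) model: use plain completion rather than a chat template.
inputs = tokenizer("Qwen2.5 is a series of language models that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```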
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
sicto/sicto-vocal-separator | sicto | 2025-04-28T02:16:54Z | 0 | 0 | null | [
"safetensors",
"music_source_separator",
"audio",
"source-separation",
"music",
"pytorch",
"en",
"dataset:musdb18hq",
"arxiv:2402.17701",
"license:mit",
"region:us"
] | null | 2025-04-28T01:25:09Z | ---
language: en
tags:
- audio
- source-separation
- music
- pytorch
license: mit
datasets:
- musdb18hq
metrics:
- sdr
- si-sdr
---
# Model Card for HSTasnet
This model performs music source separation using a Hybrid Spectrogram Transformer architecture (HSTasnet) to separate different instruments from mixed audio.
## Model Details
### Model Description
HSTasnet is a hybrid spectrogram transformer model for music source separation that combines both time and frequency domain processing. It uses parallel time-domain and frequency-domain encoders followed by RNN-based memory modules to process audio at multiple scales. The model merges these complementary representations through a hybrid RNN layer before generating masks for source separation.
- **Developed by:** Authors of "HSTasnet: A Hybrid Spectrogram Transformer for Music Source Separation"
- **Model type:** Transformer-based Source Separation
- **License:** MIT
- **Paper:** [HSTasnet: A Hybrid Spectrogram Transformer for Music Source Separation](https://arxiv.org/abs/2402.17701)
### Model Sources
- **Repository:** [burstMembrane/hstasnet](https://github.com/burstMembrane/hstasnet)
- **Paper:** [arXiv:2402.17701](https://arxiv.org/abs/2402.17701)
## Uses
### Direct Use
The model can be used to separate music tracks into their constituent instruments (vocals, drums, bass, and other). It's particularly useful for:
- Music production and remixing
- Audio analysis and research
- Creating karaoke tracks
- Isolating specific instruments for practice or study
- Isolating instruments for downstream tasks like transcription, alignment, etc.
## How to Get Started with the Model
```bash
# Example usage with the SheetMuse training framework
sm-train --model hstasnet \
--results_path results \
--data_path /path/to/training/data \
--config configs/config_moisesdb_hstasnet.yaml
```
To use the pretrained model:
```bash
pip install git+ssh://[email protected]/mattstepincto/sheetmuse-training.git
```
Then run the `separate_file` method after importing the pretrained model. Note that you will need an HF API token and access to the Bitbucket repository.
```python
import torch

from sheetmuse_training.hf.smsourceseparator import SMSourceSeparator

model = SMSourceSeparator.from_pretrained("sicto/hstasnet", token="sicto/hf/read/token")
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.eval()
output = model.separate_file(
    # the input file, e.g. mixture.wav
    file_path,
    # the folder to save the output to, e.g. out
    savedir=savedir,
    # a list of instruments used for file naming, e.g. ["drums", "bass", "other", "vocals"]
    instruments=model.instruments,
    # the device to use for inference
    device=device,
)
# output shape will be [batch_size (1), n_instruments, n_channels, n_samples]
print(f"Output shape: {output.shape}")
```
## Training Details
### Training Data
The model is typically trained on the MUSDB18-HQ dataset, which contains:
- 150 songs (86 for training, 14 for validation, 50 for testing)
- High-quality audio at 44.1kHz
- Separate stems for vocals, drums, bass, and other instruments
### Training Procedure
#### Training Hyperparameters
- **Optimizer:** AdamW
- **Learning Rate:** 1.43e-4
- **Batch Size:** 24
- **Number of Epochs:** 100
- **Patience:** 5 (for learning rate reduction)
- **Reduce Factor:** 0.8
- **Gradient Clipping:** 7.0
- **Mixed Precision Training:** Enabled
- **Gradient Accumulation Steps:** 1
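A sketch of the training-step wiring these hyperparameters imply (AdamW, plateau-based learning-rate reduction, gradient clipping at 7.0, mixed precision); `model`, `loader`, `training_step`, and `val_loss` are assumptions standing in for the actual training framework:
```python
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1.43e-4)
# "Patience: 5" / "Reduce Factor: 0.8" map onto ReduceLROnPlateau.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.8, patience=5)
scaler = torch.cuda.amp.GradScaler()  # mixed precision enabled

for batch in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = training_step(model, batch)  # hypothetical: forward pass + loss
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=7.0)
    scaler.step(optimizer)
    scaler.update()

scheduler.step(val_loss)  # once per epoch, on the validation loss
```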
### Evaluation
#### Metrics
The model is evaluated using two metrics:
- Signal-to-Distortion Ratio (SDR)
- L1 Frequency Loss
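For reference, the standard SDR definition used in source-separation evaluation (a generic formula; the exact variant used here may differ):

$$\mathrm{SDR} = 10 \log_{10} \frac{\lVert s \rVert^2}{\lVert s - \hat{s} \rVert^2}$$

where $s$ is the reference source and $\hat{s}$ the model's estimate; higher is better.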
#### Results
Typical performance metrics on MUSDB18-HQ test set:
- SDR: ~5.1 dB (average across all instruments)
With extra data:
- SDR: ~5.7 dB (average across all instruments)
## Technical Specifications
### Model Architecture
HSTasnet implements a hybrid architecture combining:
1. **Time Domain Processing**:
- Time encoder with window size 1024 and hop size 512
- RNN hidden dimension of 768
- RNN-based memory module for temporal processing
- Skip connections and mask generation
2. **Frequency Domain Processing**:
- STFT-based encoder (1024-point FFT, hop size 512, Hamming window; see the sketch after this list)
- Parallel RNN memory module
- Complementary mask generation
3. **Audio Processing Parameters**:
- Sample rate: 44.1kHz
- Number of channels: 2 (stereo)
- Chunk size: 262,144 samples
- Processing 4 sources: drums, bass, other, vocals
4. **Augmentation Strategy**:
- Channel shuffling (50% probability)
- Random polarity inversion (50% probability)
- Source-specific augmentations:
- Vocals: Pitch shifting (±5 semitones), EQ (±9dB), distortion
- Bass: Pitch shifting (±2 semitones), EQ (-3/+6dB), distortion
- Drums: Pitch shifting (±5 semitones), EQ (±9dB), distortion
- Other: Pitch shifting (±4 semitones), noise injection, time stretching (0.8-1.25x)
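As a small illustration of the frequency-domain front end referenced in the list above, a `torch.stft` call with the stated parameters would look like this (the shapes and the magnitude/phase split are illustrative, not the model's exact internals):
```python
import torch

mixture = torch.randn(2, 262_144)  # [channels, samples]: one stereo chunk at 44.1 kHz

spec = torch.stft(
    mixture,
    n_fft=1024,              # 1024-point FFT
    hop_length=512,          # hop size 512
    window=torch.hamming_window(1024),
    return_complex=True,
)
# spec: [channels, 513, n_frames] complex spectrogram; the frequency-domain
# branch would feed a representation like this into its RNN memory module.
magnitude, phase = spec.abs(), spec.angle()
print(magnitude.shape)
```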
### Compute Infrastructure
#### Hardware Requirements
- Minimum 16GB GPU memory
- Recommended: NVIDIA 3090 or similar
- CPU, MPS inference supported but slower
#### Software Requirements
- Python 3.8+
- PyTorch 1.10+
- torchaudio for STFT operations
- pytorch_lightning for training
- Additional dependencies listed in requirements.txt
### Input Requirements
- Audio format: Waveform tensor of shape [Batch, Channels, Length]
- Supported sample rates: 44.1kHz (default)
- Supports both mono and stereo inputs
- Variable length processing with optional padding
### Output Format
- Separated sources: Tensor of shape [Batch, Sources, Channels, Length]
- Maintains input sample rate and channel configuration
- Optional length matching through zero-padding
## Citation
**BibTeX:**
```bibtex
@article{hstasnet2024,
title={Real-time Low-latency Music Source Separation using Hybrid Spectrogram-TasNet},
  author={Satvik Venkatesh and Arthur Benilov and Philip Coleman and Frederic Roskam},
journal={arXiv preprint arXiv:2402.17701},
year={2024}
}
```
## Model Card Contact
For questions about the model card, please open an issue in the repository.
|
dkhanh/SmolVLM-500M-Instruct-earth-v0 | dkhanh | 2025-04-28T02:14:24Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model:adapter:HuggingFaceTB/SmolVLM-500M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T02:14:16Z | ---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-500M-Instruct
tags:
- generated_from_trainer
model-index:
- name: SmolVLM-500M-Instruct-earth-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM-500M-Instruct-earth-v0
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
SKNahin/BQTrans-new-cont-4 | SKNahin | 2025-04-28T02:13:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T14:08:51Z | ---
library_name: transformers
tags:
- llama-factory
- generated_from_trainer
model-index:
- name: BQTrans-new-cont-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BQTrans-new-cont-4
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 55555
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 10
- total_train_batch_size: 200
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mlfoundations-dev/c1_code_10d_4s_1k | mlfoundations-dev | 2025-04-28T02:09:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:36:37Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_10d_4s_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_10d_4s_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_10d_4s_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlfoundations-dev/d1_science_longest_10k | mlfoundations-dev | 2025-04-28T02:09:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T02:05:39Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_longest_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_longest_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_longest_10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
mlfoundations-dev/c1_code_10d_16s_1k | mlfoundations-dev | 2025-04-28T02:08:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:37:40Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_10d_16s_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_10d_16s_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_10d_16s_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
minket06/Qwen2.5-14B-Instruct-Q4_K_M-GGUF | minket06 | 2025-04-28T02:08:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T02:07:28Z | ---
base_model: Qwen/Qwen2.5-14B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# minket06/Qwen2.5-14B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-14B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo minket06/Qwen2.5-14B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo minket06/Qwen2.5-14B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo minket06/Qwen2.5-14B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo minket06/Qwen2.5-14B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-q4_k_m.gguf -c 2048
```
|
ieayvaz/turkish-qwen2.5-3b-instruct-gsm8k-tr-grpo-think | ieayvaz | 2025-04-28T02:06:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:28:42Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: turkish-qwen2.5-3b-instruct-gsm8k-tr-grpo-think
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for turkish-qwen2.5-3b-instruct-gsm8k-tr-grpo-think
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ieayvaz/turkish-qwen2.5-3b-instruct-gsm8k-tr-grpo-think", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ayvaz4252-y-ld-z-technical-university/turkish-qwen-grpo-math/runs/wf07mmhd)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Zotcho/medical-question-model | Zotcho | 2025-04-28T02:06:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T02:06:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VadimHammer/my_shakespeare_model | VadimHammer | 2025-04-28T02:03:24Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-27T21:30:29Z | # 🧠 Abschlussprojekt: Entwicklung eines eigenen Sprachmodells
README.md (English version)
# Autoregressive Language Model (Decoder-only Transformer, GPT-style)
This project implements an **autoregressive language model** based on a **decoder-only Transformer** architecture (GPT-style) using **PyTorch**.
The model was trained on Shakespearean text and is designed for text generation tasks.
## ✨ Key Features
- Based on **PyTorch**.
- Uses **TransformerDecoderLayer** as the core building block.
- Training optimized with **CrossEntropyLoss**.
- Model checkpoints saved at **every epoch**.
- Final model saved separately.
- Simple inference script for text generation.
- Generation supports **temperature** and **top-k sampling**.
## 🏗️ Model Architecture
| Component | Description |
|---------------------|--------------------------------------------------------|
| Token Embedding | Embedding of input tokens into vector space |
| Position Embedding | Embedding of positional information |
| Transformer Blocks | Stack of multiple `TransformerDecoderLayer` modules |
| Output Layer | Linear projection to vocabulary size |
The architecture resembles GPT-style models:
**Input tokens → Embeddings → Transformer Decoder Blocks → Output logits → Softmax probabilities.**
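A minimal PyTorch sketch of this layout, using `nn.TransformerDecoderLayer` as described (sizes follow the best-performing configuration reported below; the self-attention-as-memory wiring is one common way to use the decoder without an encoder, and is an assumption about this implementation):
```python
import torch
import torch.nn as nn

class ShakespeareGPT(nn.Module):
    """Decoder-only Transformer matching the map above (illustrative)."""

    def __init__(self, vocab_size, max_seq_len=128, d_model=384, nhead=8,
                 num_layers=8, dim_feedforward=2048, dropout=0.1):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_seq_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward,
                                           dropout, batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        b, t = idx.shape
        x = self.tok_emb(idx) + self.pos_emb(torch.arange(t, device=idx.device))
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(idx.device)
        # No encoder: the decoder attends to its own input as "memory".
        x = self.blocks(x, memory=x, tgt_mask=mask, memory_mask=mask)
        return self.head(x)  # logits over the vocabulary
```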
## ⚙️ Main Hyperparameters
| Parameter | Meaning |
|-------------------|----------------------------------------------|
| MAX_SEQ_LEN | Maximum input sequence length (e.g., 128) |
| d_model | Dimension of token embeddings |
| nhead | Number of attention heads |
| num_layers | Number of TransformerDecoder layers |
| dim_feedforward | Size of feed-forward network inside layers |
| dropout | Dropout rate inside the model |
| learning_rate | Initial learning rate for Adam optimizer |
## 🧠 Loss Function
The model uses **CrossEntropyLoss**:
- It compares the model's output logits with the ground-truth next tokens.
- Standard choice for language modeling tasks.
- Encourages the model to predict the correct next token given previous tokens.
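A sketch of how this plays out per batch, assuming `model`, `batch` (a `[batch_size, seq_len]` tensor of token ids), and `optimizer` from the training loop:
```python
import torch.nn.functional as F

inputs, targets = batch[:, :-1], batch[:, 1:]   # shift tokens by one position
logits = model(inputs)                          # [B, T-1, vocab_size]
loss = F.cross_entropy(
    logits.reshape(-1, logits.size(-1)),        # flatten all positions
    targets.reshape(-1),                        # ground-truth next tokens
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```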
## 🧪 Training Details
During training:
- **Each epoch**: the model saves a checkpoint (`model_epoch_X.pt`).
- **Final model** is also saved.
- The learning rate decreases after the 8th epoch to stabilize training.
- Evaluation is performed using validation loss (`Val Loss`).
**Example of Training Progress:**
| Epoch | Train Loss | Val Loss | Learning Rate |
|------|------------|----------|---------------|
| 1 | 6.3750 | 5.5461 | 0.000300 |
| 2 | 5.1345 | 4.9006 | 0.000300 |
| 3 | 4.6538 | 4.6558 | 0.000300 |
| 4 | 4.3609 | 4.5320 | 0.000300 |
| 5 | 4.1176 | 4.4622 | 0.000300 |
| ... | ... | ... | ... |
## 🎯 Inference (Text Generation)
**Sample prompt and generation:**
Prompt:
I have a lot more to learn about these people today.
Generated text:
KING RICHARD III: And I see of all my heart.
STANLEY: Is my hand and his face?
KING RICHARD III: Why be not be the Earl of what news; and for your heart Hath the Earl of thy love in mine, Let'st: ...
Another example:
Prompt:
I swear by the name of the king
Generated text:
The heart of it off from the fire, which Would so to his own love from these thy fortune:
All that doth they do give; For this time for not have done withal, Let's not like an old women of war,
The time that I must bring the one of a gentle breath ...
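The sampling loop behind these samples can be sketched as follows, matching the temperature and top-k options listed above (the function is illustrative, not the project's exact code):
```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, idx, max_new_tokens, temperature=1.0, top_k=40, max_seq_len=128):
    # idx: [1, T] prompt token ids; one sampled token is appended per step
    for _ in range(max_new_tokens):
        logits = model(idx[:, -max_seq_len:])[:, -1, :] / temperature
        if top_k is not None:
            v, _ = torch.topk(logits, top_k)
            logits[logits < v[:, [-1]]] = -float("inf")  # keep only top-k logits
        probs = F.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        idx = torch.cat([idx, next_id], dim=1)
    return idx
```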
## 📦 Model Files
- Models are saved at each epoch as `model_epoch_X.pt`.
- Final selected model should be uploaded to Hugging Face.
## 🗺️ Model Diagram (Mini-Map)
Here is a high-level map of the model:
```
Input tokens
  ↓
Token Embedding
  ↓
Positional Embedding
  ↓
TransformerDecoder Layers (stacked)
  ↓
Linear Layer (to vocab size)
  ↓
Logits (before Softmax)
```
---
## 📊 Analysis of the Latest Experiments
After analyzing the final training runs:
- **Best performing configuration**:
- `MAX_SEQ_LEN = 128`
- `d_model = 384`
- `nhead = 8`
- `num_layers = 8`
- `dim_feedforward = 2048`
- `dropout = 0.1`
- `learning_rate = 3e-4`
- Achieved:
- **Train Loss ≈ 3.7**
- **Val Loss ≈ 4.38** after 5 epochs.
**Key Observations:**
- Increasing `d_model` from 256 to 384 improved quality.
- Reducing `dropout` to `0.1` gave a more stable convergence.
- Larger models (e.g., `d_model = 512`) struggled to converge due to overfitting and resource constraints.
- Sequence length `128-256` is optimal for this dataset.
## ✅ Recommendations
- Use the model trained with the parameters listed above.
- Optionally fine-tune further on a larger Shakespeare corpus.
- Consider beam search for even higher generation quality if desired.
---
## 🤝 Acknowledgments
- Based on principles described in the original GPT papers.
- Inspired by HuggingFace Transformers library.
- Special thanks to the guidance during experiments and optimization process.
---
# 🚀 Good Luck and Happy Deploying!
|
BlandAIOrg/text_to_speech | BlandAIOrg | 2025-04-28T02:00:17Z | 134 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:adapter:unsloth/orpheus-3b-0.1-ft",
"region:us"
] | null | 2025-03-31T22:15:47Z | ---
base_model: unsloth/orpheus-3b-0.1-ft
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
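Based on the metadata above (a PEFT adapter on `unsloth/orpheus-3b-0.1-ft`), a loading sketch might look like the following; this is a generic PEFT recipe, not confirmed usage instructions for this model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/orpheus-3b-0.1-ft")
tokenizer = AutoTokenizer.from_pretrained("unsloth/orpheus-3b-0.1-ft")
# Attach this repository's LoRA adapter to the base model.
model = PeftModel.from_pretrained(base, "BlandAIOrg/text_to_speech")
```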
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
hypaai/wspr_wazobia_run1_04272025 | hypaai | 2025-04-28T01:55:21Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ig",
"yo",
"en",
"ha",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-27T17:58:47Z | ---
library_name: transformers
language:
- ig
- yo
- en
- ha
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: wspr_wazobia_run1_04272025
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wspr_wazobia_run1_04272025
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
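A minimal inference sketch using the 🤗 `pipeline` API (the audio filename is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition", model="hypaai/wspr_wazobia_run1_04272025")

# "sample.wav" is a placeholder; pass any local audio file or array.
print(asr("sample.wav")["text"])
```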
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 12000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
MrRobotoAI/D6 | MrRobotoAI | 2025-04-28T01:52:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/B2",
"base_model:merge:MrRobotoAI/B2",
"base_model:MrRobotoAI/D2",
"base_model:merge:MrRobotoAI/D2",
"base_model:MrRobotoAI/D5",
"base_model:merge:MrRobotoAI/D5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:49:06Z | ---
base_model:
- MrRobotoAI/D2
- MrRobotoAI/D5
- MrRobotoAI/B2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/D2](https://huggingface.co/MrRobotoAI/D2) as a base.
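Conceptually, task arithmetic adds weighted "task vectors" (each fine-tuned model's weights minus the base weights) back onto the base model. A rough per-tensor sketch of the idea — not mergekit's exact implementation — where the weights `w_i` correspond to the values in the YAML configuration below:
```python
import torch

# Rough sketch of task arithmetic for a single weight tensor.
def task_arithmetic(base: torch.Tensor, tuned: list, weights: list) -> torch.Tensor:
    merged = base.clone()
    for t, w in zip(tuned, weights):
        merged += w * (t - base)  # add the weighted task vector
    return merged
```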
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/D5](https://huggingface.co/MrRobotoAI/D5)
* [MrRobotoAI/B2](https://huggingface.co/MrRobotoAI/B2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/B2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/D5
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/D2
dtype: bfloat16
```
|
bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF | bartowski | 2025-04-28T01:46:44Z | 0 | 0 | null | [
"gguf",
"nvidia",
"math",
"text-generation",
"en",
"dataset:nvidia/OpenMathReasoning",
"base_model:nvidia/OpenMath-Nemotron-14B-Kaggle",
"base_model:quantized:nvidia/OpenMath-Nemotron-14B-Kaggle",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T00:18:46Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
datasets:
- nvidia/OpenMathReasoning
language:
- en
license: cc-by-4.0
tags:
- nvidia
- math
base_model: nvidia/OpenMath-Nemotron-14B-Kaggle
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of OpenMath-Nemotron-14B-Kaggle by nvidia
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5192">b5192</a> for quantization.
Original model: https://huggingface.co/nvidia/OpenMath-Nemotron-14B-Kaggle
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
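For a quick local test once you've downloaded a quant, an invocation along these lines should work with recent llama.cpp builds (the filename shown is the Q4_K_M quant; adjust it to your download):
```
llama-cli -m nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_K_M.gguf -p "Solve for x: 2x + 3 = 11" -n 256
```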
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [OpenMath-Nemotron-14B-Kaggle-bf16.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-bf16.gguf) | bf16 | 29.55GB | false | Full BF16 weights. |
| [OpenMath-Nemotron-14B-Kaggle-Q8_0.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q8_0.gguf) | Q8_0 | 15.70GB | false | Extremely high quality, generally unneeded but max available quant. |
| [OpenMath-Nemotron-14B-Kaggle-Q6_K_L.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q6_K_L.gguf) | Q6_K_L | 12.50GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q6_K.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q6_K.gguf) | Q6_K | 12.12GB | false | Very high quality, near perfect, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q5_K_L.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q5_K_L.gguf) | Q5_K_L | 10.99GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q5_K_M.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q5_K_M.gguf) | Q5_K_M | 10.51GB | false | High quality, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q5_K_S.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q5_K_S.gguf) | Q5_K_S | 10.27GB | false | High quality, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q4_K_L.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_K_L.gguf) | Q4_K_L | 9.57GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q4_1.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_1.gguf) | Q4_1 | 9.39GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [OpenMath-Nemotron-14B-Kaggle-Q4_K_M.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_K_M.gguf) | Q4_K_M | 8.99GB | false | Good quality, default size for most use cases, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q3_K_XL.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q3_K_XL.gguf) | Q3_K_XL | 8.61GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [OpenMath-Nemotron-14B-Kaggle-Q4_K_S.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_K_S.gguf) | Q4_K_S | 8.57GB | false | Slightly lower quality with more space savings, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-IQ4_NL.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-IQ4_NL.gguf) | IQ4_NL | 8.55GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [OpenMath-Nemotron-14B-Kaggle-Q4_0.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_0.gguf) | Q4_0 | 8.54GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [OpenMath-Nemotron-14B-Kaggle-IQ4_XS.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-IQ4_XS.gguf) | IQ4_XS | 8.12GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [OpenMath-Nemotron-14B-Kaggle-Q3_K_L.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q3_K_L.gguf) | Q3_K_L | 7.92GB | false | Lower quality but usable, good for low RAM availability. |
| [OpenMath-Nemotron-14B-Kaggle-Q3_K_M.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q3_K_M.gguf) | Q3_K_M | 7.34GB | false | Low quality. |
| [OpenMath-Nemotron-14B-Kaggle-IQ3_M.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-IQ3_M.gguf) | IQ3_M | 6.92GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [OpenMath-Nemotron-14B-Kaggle-Q3_K_S.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q3_K_S.gguf) | Q3_K_S | 6.66GB | false | Low quality, not recommended. |
| [OpenMath-Nemotron-14B-Kaggle-Q2_K_L.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q2_K_L.gguf) | Q2_K_L | 6.53GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [OpenMath-Nemotron-14B-Kaggle-IQ3_XS.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-IQ3_XS.gguf) | IQ3_XS | 6.38GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [OpenMath-Nemotron-14B-Kaggle-IQ3_XXS.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-IQ3_XXS.gguf) | IQ3_XXS | 5.95GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [OpenMath-Nemotron-14B-Kaggle-Q2_K.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-Q2_K.gguf) | Q2_K | 5.77GB | false | Very low quality but surprisingly usable. |
| [OpenMath-Nemotron-14B-Kaggle-IQ2_M.gguf](https://huggingface.co/bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF/blob/main/nvidia_OpenMath-Nemotron-14B-Kaggle-IQ2_M.gguf) | IQ2_M | 5.36GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF --include "nvidia_OpenMath-Nemotron-14B-Kaggle-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/nvidia_OpenMath-Nemotron-14B-Kaggle-GGUF --include "nvidia_OpenMath-Nemotron-14B-Kaggle-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (nvidia_OpenMath-Nemotron-14B-Kaggle-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF | mradermacher | 2025-04-28T01:46:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02",
"base_model:quantized:Nexesenex/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-27T19:43:44Z | ---
base_model: Nexesenex/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
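For example, the split Q6_K files above can be reassembled on Linux/macOS before use:
```
cat Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q6_K.gguf.part1of2 Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q6_K.gguf.part2of2 > Llama_3.x_70b_L3.3_OpenBioLLM_128K_v1.02.i1-Q6_K.gguf
```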
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
aslinguist/mistral-lora-Amis-summarization | aslinguist | 2025-04-28T01:43:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T01:43:16Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: mistral-lora-Amis-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-lora-Amis-summarization
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5538
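A minimal loading sketch for this LoRA adapter via the standard PEFT API, which applies the adapter on top of the base Mistral checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base, "aslinguist/mistral-lora-Amis-summarization")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
```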
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 3.0278 |
| 3.1079 | 2.0 | 10 | 2.6566 |
| 3.1079 | 3.0 | 15 | 2.4844 |
| 2.1821 | 4.0 | 20 | 2.3726 |
| 2.1821 | 5.0 | 25 | 2.3303 |
| 1.6491 | 6.0 | 30 | 2.3548 |
| 1.6491 | 7.0 | 35 | 2.4141 |
| 1.1291 | 8.0 | 40 | 2.5538 |
### Framework versions
- PEFT 0.15.0
- Transformers 4.51.2
- Pytorch 2.2.2+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
blairisland/blairislan | blairisland | 2025-04-28T01:35:25Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T01:35:25Z | ---
license: bigcode-openrail-m
---
|
aslinguist/llama-lora-Amis-summarization | aslinguist | 2025-04-28T01:34:20Z | 6 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-04-17T02:22:25Z | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: llama-lora-Amis-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-lora-Amis-summarization
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 2.7474 |
| 2.8232 | 2.0 | 10 | 2.4767 |
| 2.8232 | 3.0 | 15 | 2.3684 |
| 2.2387 | 4.0 | 20 | 2.3432 |
| 2.2387 | 5.0 | 25 | 2.3335 |
| 1.921 | 6.0 | 30 | 2.3583 |
| 1.921 | 7.0 | 35 | 2.4341 |
| 1.5739 | 8.0 | 40 | 2.4894 |
### Framework versions
- PEFT 0.15.0
- Transformers 4.51.2
- Pytorch 2.2.2+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
myttt/vit-base-beans | myttt | 2025-04-28T01:31:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-04-28T01:25:33Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9774
- Loss: 0.0798
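A minimal inference sketch with the 🤗 `pipeline` API (the image filename is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="myttt/vit-base-beans")
# "bean_leaf.jpg" is a placeholder; pass any local image path or URL.
print(classifier("bean_leaf.jpg"))
```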
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.2799 | 1.0 | 130 | 0.9624 | 0.2172 |
| 0.1304 | 2.0 | 260 | 0.9699 | 0.1272 |
| 0.1387 | 3.0 | 390 | 0.9774 | 0.0970 |
| 0.0855 | 4.0 | 520 | 0.9925 | 0.0652 |
| 0.1134 | 5.0 | 650 | 0.9774 | 0.0798 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
anassaleh218/perso_Character100_Llama-3.1-8B-bnb-4bit_tokenizer | anassaleh218 | 2025-04-28T01:30:29Z | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T01:30:24Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anassaleh218/perso_Character100_Llama-3.1-8B-bnb-4bit | anassaleh218 | 2025-04-28T01:30:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T01:30:13Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** anassaleh218
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_llm2_gen4_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-28T01:28:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T01:28:17Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VikramTiwari/gemma-text-to-sql | VikramTiwari | 2025-04-28T01:24:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T00:43:33Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="VikramTiwari/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
benfredj2/cv-generator002 | benfredj2 | 2025-04-28T01:21:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T10:50:53Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** benfredj2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
devJy/MIMIC-1.5B | devJy | 2025-04-28T01:15:06Z | 0 | 0 | null | [
"safetensors",
"llama",
"unsloth",
"trl",
"grpo",
"license:gpl-3.0",
"region:us"
] | null | 2025-04-27T21:00:31Z | ---
license: gpl-3.0
tags:
- unsloth
- trl
- grpo
---
|
MLLEARNERAT/ppo-LunarLander-v2 | MLLEARNERAT | 2025-04-28T01:10:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T01:10:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.95 +/- 41.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` upload convention; check the repo's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent.
# The .zip filename is an assumption; verify it against the repo files.
checkpoint = load_from_hub("MLLEARNERAT/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/Llama-3-8B-dutch-GGUF | mradermacher | 2025-04-28T01:08:18Z | 344 | 0 | transformers | [
"transformers",
"gguf",
"ORPO",
"llama 3 8B",
"conversational",
"nl",
"dataset:BramVanroy/ultra_feedback_dutch",
"base_model:ReBatch/Llama-3-8B-dutch",
"base_model:quantized:ReBatch/Llama-3-8B-dutch",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-12-26T15:25:54Z | ---
base_model: ReBatch/Llama-3-8B-dutch
datasets:
- BramVanroy/ultra_feedback_dutch
language:
- nl
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- ORPO
- llama 3 8B
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ReBatch/Llama-3-8B-dutch
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-dutch-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-dutch-GGUF/resolve/main/Llama-3-8B-dutch.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nbeerbower/EVA-abliterated-TIES-Qwen2.5-72B | nbeerbower | 2025-04-28T01:07:31Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2306.01708",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2",
"base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2",
"base_model:Qwen/Qwen2.5-72B",
"base_model:merge:Qwen/Qwen2.5-72B",
"base_model:huihui-ai/Qwen2.5-72B-Instruct-abliterated",
"base_model:merge:huihui-ai/Qwen2.5-72B-Instruct-abliterated",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-08T03:48:51Z | ---
base_model:
- huihui-ai/Qwen2.5-72B-Instruct-abliterated
- Qwen/Qwen2.5-72B
- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# EVA-abliterated-TIES-Qwen2.5-72B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) as a base.
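Conceptually, TIES trims each model's delta from the base, elects a majority sign per parameter, and averages only the agreeing deltas. A rough conceptual sketch for one tensor — not mergekit's exact implementation; with `density: 1` (as in the YAML below) the trimming step is skipped entirely:
```python
import torch

def ties_merge(base, tuned, density=1.0):
    deltas = [t - base for t in tuned]
    if density < 1.0:  # trim: keep only the largest-magnitude entries
        for i, d in enumerate(deltas):
            k = max(1, int(d.numel() * density))
            thresh = d.abs().flatten().topk(k).values.min()
            deltas[i] = torch.where(d.abs() >= thresh, d, torch.zeros_like(d))
    sign = torch.sign(torch.stack(deltas).sum(dim=0))          # elect majority sign
    agreed = [torch.where(d.sign() == sign, d, torch.zeros_like(d)) for d in deltas]
    count = sum((a != 0).float() for a in agreed).clamp(min=1.0)
    return base + sum(agreed) / count                          # mean of agreeing deltas
```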
### Models Merged
The following models were included in the merge:
* [huihui-ai/Qwen2.5-72B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-72B-Instruct-abliterated)
* [EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: huihui-ai/Qwen2.5-72B-Instruct-abliterated
parameters:
weight: 1
density: 1
- model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-72B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: bfloat16
``` |
mlfoundations-dev/d1_science_all | mlfoundations-dev | 2025-04-28T01:05:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T13:56:31Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_all
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_all dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B | nbeerbower | 2025-04-28T01:04:00Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2306.01708",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"base_model:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"base_model:merge:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"base_model:nbeerbower/Rombos-Qwen2.5-32B-lorablated",
"base_model:merge:nbeerbower/Rombos-Qwen2.5-32B-lorablated",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-23T14:41:46Z | ---
base_model:
- Qwen/Qwen2.5-32B
- nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
- nbeerbower/Rombos-Qwen2.5-32B-lorablated
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Rombos-EVAGutenberg-TIES-Qwen2.5-32B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
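As the card provides no usage example, here is a minimal loading sketch; it assumes the merged checkpoint loads like any standard Qwen2.5 model via `transformers`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",           # a 32B model needs multiple GPUs or offloading
)
```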
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/EVA-Gutenberg3-Qwen2.5-32B](https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B)
* [nbeerbower/Rombos-Qwen2.5-32B-lorablated](https://huggingface.co/nbeerbower/Rombos-Qwen2.5-32B-lorablated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/Rombos-Qwen2.5-32B-lorablated
parameters:
weight: 1
density: 1
- model: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: bfloat16
``` |
cryptoncalls/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat | cryptoncalls | 2025-04-28T01:03:36Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am stubby hardy cat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-11T00:30:16Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am stubby hardy cat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cryptoncalls/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_hardy_cat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
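For reference, a minimal GRPO sketch in the style of the TRL documentation is shown below. It is not the actual Gensyn rl-swarm setup (distributed swarm training and its reward functions are not reproduced here); the dataset and toy length reward are illustrative assumptions.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters.
    return [-abs(20 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-grpo", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```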
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sdfsdsssFTersr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_shy_capybara | sdfsdsssFTersr | 2025-04-28T01:03:19Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am striped shy capybara",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T23:04:47Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_shy_capybara
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am striped shy capybara
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_shy_capybara
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sdfsdsssFTersr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_shy_capybara", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
deanc247/hair | deanc247 | 2025-04-28T01:01:08Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T00:33:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HAIR
---
# Hair
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HAIR` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HAIR",
"lora_weights": "https://huggingface.co/deanc247/hair/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('deanc247/hair', weight_name='lora.safetensors')
image = pipeline('HAIR').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/deanc247/hair/discussions) to add images that show off what you’ve made with this LoRA.
|
owl-agent/all_weighted_qwen2_32B | owl-agent | 2025-04-28T01:00:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T02:57:19Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: all_weighted_qwen2_32B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all_weighted_qwen2_32B
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the math_raw_trajectory, the loong, the infinity_video and the simple_deep_searcher_filtered datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 12
- total_train_batch_size: 72
- total_eval_batch_size: 48
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/DarkHazard-v1.1-24b-i1-GGUF | mradermacher | 2025-04-28T00:58:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:FlareRebellion/DarkHazard-v1.1-24b",
"base_model:quantized:FlareRebellion/DarkHazard-v1.1-24b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-27T20:11:55Z | ---
base_model: FlareRebellion/DarkHazard-v1.1-24b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FlareRebellion/DarkHazard-v1.1-24b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
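Beyond the llama.cpp CLI, a GGUF file can also be loaded from Python. The sketch below is an assumption-laden example using `llama-cpp-python` with the i1-Q4_K_M file from the table below; swap the filename for whichever quant you download.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the quants listed in the table below.
path = hf_hub_download(
    repo_id="mradermacher/DarkHazard-v1.1-24b-i1-GGUF",
    filename="DarkHazard-v1.1-24b.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about hazards:", max_tokens=48)
print(out["choices"][0]["text"])
```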
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/DarkHazard-v1.1-24b-i1-GGUF/resolve/main/DarkHazard-v1.1-24b.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kutches/KutchesV3 | Kutches | 2025-04-28T00:57:57Z | 0 | 0 | null | [
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:finetune:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T18:37:08Z | ---
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
--- |
colaguo/legalclassBERTlarge | colaguo | 2025-04-28T00:52:20Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-04-27T23:35:27Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: legalclassBERTlarge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalclassBERTlarge
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2222
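The card ships without a usage example; a minimal fill-mask sketch with the `transformers` pipeline might look as follows (the example sentence is an illustrative assumption).

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="colaguo/legalclassBERTlarge")
for pred in fill("The defendant was found [MASK] of all charges."):
    print(pred["token_str"], round(pred["score"], 3))
```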
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8889 | 0.8 | 500 | 1.7423 |
| 1.6561 | 1.6 | 1000 | 1.5896 |
| 1.5418 | 2.4 | 1500 | 1.4876 |
| 1.4735 | 3.2 | 2000 | 1.4236 |
| 1.4205 | 4.0 | 2500 | 1.3509 |
| 1.3694 | 4.8 | 3000 | 1.3100 |
| 1.3201 | 5.6 | 3500 | 1.2715 |
| 1.2571 | 6.4 | 4000 | 1.2630 |
| 1.276 | 7.2 | 4500 | 1.2449 |
| 1.2483 | 8.0 | 5000 | 1.2222 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
tattedbootyab/666 | tattedbootyab | 2025-04-28T00:51:58Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:afl-3.0",
"region:us"
] | text-to-image | 2025-04-28T00:42:50Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/GOLD_2025-04-19_10-58-43.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: afl-3.0
---
# NUDSIE
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/tattedbootyab/666/tree/main) them in the Files & versions tab.
|
sawadogosalif/SaChi_by_Mistral | sawadogosalif | 2025-04-28T00:50:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T00:50:27Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sawadogosalif
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
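A minimal loading sketch with Unsloth's `FastLanguageModel` is shown below; the sequence length and 4-bit setting are assumptions, so adjust them to your hardware.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sawadogosalif/SaChi_by_Mistral",
    max_seq_length=2048,   # assumption: adjust to your use case
    load_in_4bit=True,     # assumption: fits consumer GPUs
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```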
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oabi/math_ultrachatQwen05B_dpo | oabi | 2025-04-28T00:49:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"HuggingFaceH4/ultrafeedback_binarized",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2-0.5B",
"base_model:finetune:Qwen/Qwen2-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T22:01:46Z | ---
base_model: Qwen/Qwen2-0.5B
datasets: HuggingFaceH4/ultrafeedback_binarized
library_name: transformers
model_name: Qwen/Qwen2-0.5B
tags:
- generated_from_trainer
- alignment-handbook
- HuggingFaceH4/ultrafeedback_binarized
- trl
- dpo
licence: license
---
# Model Card for Qwen/Qwen2-0.5B
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="oabi/math_ultrachatQwen05B_dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
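For reference, a minimal DPO sketch in the style of the TRL documentation is shown below. It is not the exact training script used for this checkpoint (the beta and output directory are illustrative assumptions); `train_prefs` is the dataset's standard preference split.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen-dpo", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```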
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Vitality16/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_pale_tuna | Vitality16 | 2025-04-28T00:47:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am powerful pale tuna",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T22:02:06Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_pale_tuna
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am powerful pale tuna
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_pale_tuna
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Vitality16/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_pale_tuna", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AlexHung29629/mistral-small-sft-ifrl | AlexHung29629 | 2025-04-28T00:44:16Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-25T06:56:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kokovova/b55dcbdd-19c3-49bc-8222-371b47cc8de6 | kokovova | 2025-04-28T00:44:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T00:40:19Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b55dcbdd-19c3-49bc-8222-371b47cc8de6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c5e591834179f77f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c5e591834179f77f_train_data.json
type:
field_instruction: source
field_output: good-translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/b55dcbdd-19c3-49bc-8222-371b47cc8de6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/c5e591834179f77f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9d44feea-543c-4494-a09e-29717884cd47
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 9d44feea-543c-4494-a09e-29717884cd47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b55dcbdd-19c3-49bc-8222-371b47cc8de6
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on an unnamed dataset (recorded by the trainer as `None`).
It achieves the following results on the evaluation set:
- Loss: 1.6451
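Since this repository contains a PEFT (LoRA) adapter rather than full weights, loading might look like the sketch below; `trust_remote_code=True` is needed because the base model ships custom code.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "numind/NuExtract-v1.5", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "kokovova/b55dcbdd-19c3-49bc-8222-371b47cc8de6")
```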
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3035 | 0.0475 | 200 | 1.6451 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/91d1bcbe-1c79-4400-93d4-3b342dd318af | shibajustfor | 2025-04-28T00:44:07Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"region:us"
] | null | 2025-04-28T00:43:42Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-Math-7B-Instruct
model-index:
- name: shibajustfor/91d1bcbe-1c79-4400-93d4-3b342dd318af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/91d1bcbe-1c79-4400-93d4-3b342dd318af
This model was trained from scratch on an unnamed dataset (recorded by the trainer as `None`).
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
MrRobotoAI/D5 | MrRobotoAI | 2025-04-28T00:43:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Blackroot/Llama-3-LongStory-LORA",
"base_model:merge:Blackroot/Llama-3-LongStory-LORA",
"base_model:Chat-Error/Claude-Kimiko",
"base_model:merge:Chat-Error/Claude-Kimiko",
"base_model:MrRobotoAI/D4",
"base_model:merge:MrRobotoAI/D4",
"base_model:Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b",
"base_model:merge:Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b",
"base_model:athirdpath/BigMistral-11b-GLUE_LORA",
"base_model:merge:athirdpath/BigMistral-11b-GLUE_LORA",
"base_model:automorphic/LORA_20231221_042843_philosophy",
"base_model:merge:automorphic/LORA_20231221_042843_philosophy",
"base_model:basilePlus/llama3-8b-schopenhauer",
"base_model:merge:basilePlus/llama3-8b-schopenhauer",
"base_model:hannahbillo/dpo-llama3-8b-grammar-rules",
"base_model:merge:hannahbillo/dpo-llama3-8b-grammar-rules",
"base_model:ian00000/Llama-3-8B_offensive_CoT_finetuned",
"base_model:merge:ian00000/Llama-3-8B_offensive_CoT_finetuned",
"base_model:jrahn/llama-3-8b-claudstruct-v3",
"base_model:merge:jrahn/llama-3-8b-claudstruct-v3",
"base_model:jspr/llama3-instruct-wordcel-smutrom-8k_peft",
"base_model:merge:jspr/llama3-instruct-wordcel-smutrom-8k_peft",
"base_model:jspr/llama3-instruct-wordcel-smutrom_peft",
"base_model:merge:jspr/llama3-instruct-wordcel-smutrom_peft",
"base_model:jspr/llama3-wordcel-smutrom-reorder_peft",
"base_model:merge:jspr/llama3-wordcel-smutrom-reorder_peft",
"base_model:jspr/llama3-wordcel-smutrom_peft",
"base_model:merge:jspr/llama3-wordcel-smutrom_peft",
"base_model:jspr/smut_llama_8b_32k_peft_ax",
"base_model:merge:jspr/smut_llama_8b_32k_peft_ax",
"base_model:jspr/smut_llama_8b_smut_2k_romance_1k_peft",
"base_model:merge:jspr/smut_llama_8b_smut_2k_romance_1k_peft",
"base_model:jspr/smut_llama_8b_smutromance_32k_peft",
"base_model:merge:jspr/smut_llama_8b_smutromance_32k_peft",
"base_model:sardukar/physiology-8k-llama3-8b-qlora",
"base_model:merge:sardukar/physiology-8k-llama3-8b-qlora",
"base_model:sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA",
"base_model:merge:sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA",
"base_model:surya-narayanan/human_sexuality",
"base_model:merge:surya-narayanan/human_sexuality",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:25:09Z | ---
base_model:
- MrRobotoAI/D4
- Blackroot/Llama-3-LongStory-LORA
- MrRobotoAI/D4
- sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA
- MrRobotoAI/D4
- sardukar/physiology-8k-llama3-8b-qlora
- MrRobotoAI/D4
- Chat-Error/Claude-Kimiko
- MrRobotoAI/D4
- jspr/smut_llama_8b_smutromance_32k_peft
- MrRobotoAI/D4
- jspr/llama3-wordcel-smutrom-reorder_peft
- MrRobotoAI/D4
- hannahbillo/dpo-llama3-8b-grammar-rules
- MrRobotoAI/D4
- surya-narayanan/human_sexuality
- MrRobotoAI/D4
- automorphic/LORA_20231221_042843_philosophy
- MrRobotoAI/D4
- Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b
- MrRobotoAI/D4
- ian00000/Llama-3-8B_offensive_CoT_finetuned
- MrRobotoAI/D4
- jspr/llama3-instruct-wordcel-smutrom_peft
- MrRobotoAI/D4
- jspr/smut_llama_8b_smut_2k_romance_1k_peft
- MrRobotoAI/D4
- athirdpath/BigMistral-11b-GLUE_LORA
- MrRobotoAI/D4
- jspr/llama3-instruct-wordcel-smutrom-8k_peft
- MrRobotoAI/D4
- MrRobotoAI/D4
- jspr/llama3-wordcel-smutrom_peft
- MrRobotoAI/D4
- jrahn/llama-3-8b-claudstruct-v3
- MrRobotoAI/D4
- jspr/smut_llama_8b_32k_peft_ax
- MrRobotoAI/D4
- basilePlus/llama3-8b-schopenhauer
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) as a base.
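As a rough illustration of the Model Stock idea from the linked paper (interpolating between the average of the fine-tuned weights and the base, with a ratio derived from the angle between task vectors), consider the per-tensor sketch below. It is a simplification written from the paper's closed form, not mergekit's implementation.

```python
import torch
import torch.nn.functional as F

def model_stock(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    deltas = [ft - base for ft in finetuned]
    k = len(deltas)
    # Mean pairwise cosine similarity between task vectors (the "angle").
    cos = torch.stack([
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]).mean()
    # Interpolation ratio from the paper's closed form.
    t = k * cos / (1 + (k - 1) * cos)
    avg = torch.stack(finetuned).mean(dim=0)
    return t * avg + (1 - t) * base
```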
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA](https://huggingface.co/sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [sardukar/physiology-8k-llama3-8b-qlora](https://huggingface.co/sardukar/physiology-8k-llama3-8b-qlora)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [Chat-Error/Claude-Kimiko](https://huggingface.co/Chat-Error/Claude-Kimiko)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/smut_llama_8b_smutromance_32k_peft](https://huggingface.co/jspr/smut_llama_8b_smutromance_32k_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-wordcel-smutrom-reorder_peft](https://huggingface.co/jspr/llama3-wordcel-smutrom-reorder_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [hannahbillo/dpo-llama3-8b-grammar-rules](https://huggingface.co/hannahbillo/dpo-llama3-8b-grammar-rules)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [surya-narayanan/human_sexuality](https://huggingface.co/surya-narayanan/human_sexuality)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [automorphic/LORA_20231221_042843_philosophy](https://huggingface.co/automorphic/LORA_20231221_042843_philosophy)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [ian00000/Llama-3-8B_offensive_CoT_finetuned](https://huggingface.co/ian00000/Llama-3-8B_offensive_CoT_finetuned)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-instruct-wordcel-smutrom_peft](https://huggingface.co/jspr/llama3-instruct-wordcel-smutrom_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/smut_llama_8b_smut_2k_romance_1k_peft](https://huggingface.co/jspr/smut_llama_8b_smut_2k_romance_1k_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [athirdpath/BigMistral-11b-GLUE_LORA](https://huggingface.co/athirdpath/BigMistral-11b-GLUE_LORA)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-instruct-wordcel-smutrom-8k_peft](https://huggingface.co/jspr/llama3-instruct-wordcel-smutrom-8k_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/llama3-wordcel-smutrom_peft](https://huggingface.co/jspr/llama3-wordcel-smutrom_peft)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jrahn/llama-3-8b-claudstruct-v3](https://huggingface.co/jrahn/llama-3-8b-claudstruct-v3)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [jspr/smut_llama_8b_32k_peft_ax](https://huggingface.co/jspr/smut_llama_8b_32k_peft_ax)
* [MrRobotoAI/D4](https://huggingface.co/MrRobotoAI/D4) + [basilePlus/llama3-8b-schopenhauer](https://huggingface.co/basilePlus/llama3-8b-schopenhauer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/D4+athirdpath/BigMistral-11b-GLUE_LORA
- model: MrRobotoAI/D4+automorphic/LORA_20231221_042843_philosophy
- model: MrRobotoAI/D4+basilePlus/llama3-8b-schopenhauer
- model: MrRobotoAI/D4+Blackroot/Llama-3-LongStory-LORA
- model: MrRobotoAI/D4+Chat-Error/Claude-Kimiko
- model: MrRobotoAI/D4+hannahbillo/dpo-llama3-8b-grammar-rules
- model: MrRobotoAI/D4+ian00000/Llama-3-8B_offensive_CoT_finetuned
- model: MrRobotoAI/D4+jrahn/llama-3-8b-claudstruct-v3
- model: MrRobotoAI/D4+jspr/llama3-instruct-wordcel-smutrom_peft
- model: MrRobotoAI/D4+jspr/llama3-instruct-wordcel-smutrom-8k_peft
- model: MrRobotoAI/D4+jspr/llama3-wordcel-smutrom_peft
- model: MrRobotoAI/D4+jspr/llama3-wordcel-smutrom-reorder_peft
- model: MrRobotoAI/D4+jspr/smut_llama_8b_32k_peft_ax
- model: MrRobotoAI/D4+jspr/smut_llama_8b_smut_2k_romance_1k_peft
- model: MrRobotoAI/D4+jspr/smut_llama_8b_smutromance_32k_peft
- model: MrRobotoAI/D4+sardukar/physiology-8k-llama3-8b-qlora
- model: MrRobotoAI/D4+sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA
- model: MrRobotoAI/D4+surya-narayanan/human_sexuality
- model: MrRobotoAI/D4+basilePlus/llama3-8b-schopenhauer
- model: MrRobotoAI/D4+Triangle104/Vulkane_120-Days-of-Sodom-LoRA-Mistral-7b
merge_method: model_stock
base_model: MrRobotoAI/D4
normalize: true
dtype: float16
```
|
gianrp6/reversecow | gianrp6 | 2025-04-28T00:40:56Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-04-28T00:25:28Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/image_fx_.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: mit
---
# reversecow
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/gianrp6/reversecow/tree/main) them in the Files & versions tab.
|
debiao29/task-15-Qwen-Qwen2.5-3B | debiao29 | 2025-04-28T00:34:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"region:us"
] | null | 2024-11-09T10:21:59Z | ---
base_model: Qwen/Qwen2.5-3B
library_name: peft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
pcam-interpretability/resnet50-val08733-resnet-test | pcam-interpretability | 2025-04-28T00:33:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T00:33:39Z | # resnet50
**Best Validation Accuracy:** `0.8733`
## Metadata
- **Model Name**: `resnet50`
- **Optimizer**: `adamw`
- **Scheduler**: `cosine`
- **Weight Decay**: `0.05`
- **Warmup Epochs**: `3`
- **Patience**: `5`
- **Amp**: `True`
- **Seed**: `42`
- **Batch Size**: `32`
- **Initial Lr**: `7.5e-05`
- **Total Epochs Ran**: `1`
- **Early Stopped**: `False`
- **Training Time Seconds**: `2375.9705023765564`
- **Num Parameters**: `22067009`
- **Device**: `NVIDIA A100-SXM4-40GB`
- **Run Id**: `resnet-test`
## Training Configuration
- Epochs: `1`
- Batch size: `32`
- Learning rate (initial): `7.5e-05`
## Training Logs (Per Epoch)
| Epoch | Train Loss | Train Acc | Val Loss | Val Acc | LR |
|-------|------------|-----------|----------|---------|----|
| 1 | 0.2911 | 0.8787 | 0.3048 | 0.8733 | 0.000025 |
|
StephenGenusa/Skywork-VL-Reward-7B-IQ4_NL-GGUF | StephenGenusa | 2025-04-28T00:31:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Multimodal Reward Model",
"Reward Model",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:Skywork/Skywork-VL-Reward-7B",
"base_model:quantized:Skywork/Skywork-VL-Reward-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2025-04-28T00:31:01Z | ---
base_model: Skywork/Skywork-VL-Reward-7B
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- Multimodal Reward Model
- Reward Model
- llama-cpp
- gguf-my-repo
---
# StephenGenusa/Skywork-VL-Reward-7B-IQ4_NL-GGUF
This model was converted to GGUF format from [`Skywork/Skywork-VL-Reward-7B`](https://huggingface.co/Skywork/Skywork-VL-Reward-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Skywork/Skywork-VL-Reward-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo StephenGenusa/Skywork-VL-Reward-7B-IQ4_NL-GGUF --hf-file skywork-vl-reward-7b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo StephenGenusa/Skywork-VL-Reward-7B-IQ4_NL-GGUF --hf-file skywork-vl-reward-7b-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo StephenGenusa/Skywork-VL-Reward-7B-IQ4_NL-GGUF --hf-file skywork-vl-reward-7b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo StephenGenusa/Skywork-VL-Reward-7B-IQ4_NL-GGUF --hf-file skywork-vl-reward-7b-iq4_nl-imat.gguf -c 2048
```
|
edlynmonica/edlynmonica | edlynmonica | 2025-04-28T00:29:40Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-04-28T00:29:40Z | ---
license: artistic-2.0
---
|
gghfez/Qwen2.5-14B-Base-lm_head-LoRA | gghfez | 2025-04-28T00:22:50Z | 0 | 0 | null | [
"safetensors",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B",
"base_model:finetune:Qwen/Qwen2.5-14B",
"license:apache-2.0",
"region:us"
] | null | 2024-11-27T00:05:32Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-14B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
The eos_token was trained into the lm_head via this LoRA.
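A minimal sketch of applying the adapter on top of the 4-bit base with PEFT (the quantization settings are illustrative assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "gghfez/Qwen2.5-14B-Base-lm_head-LoRA")
```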
This should allow QLoRA fine-tunes with 24 or even 16 GB of VRAM. |
hxyscott/math-decontamination-4.1-mini-error-removed-new-token-15epoch-step21735 | hxyscott | 2025-04-28T00:09:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T00:06:39Z | ---
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hxyscott
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Math-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MrRobotoAI/D4-Q4_K_M-GGUF | MrRobotoAI | 2025-04-28T00:09:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/D4",
"base_model:quantized:MrRobotoAI/D4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T00:08:51Z | ---
base_model: MrRobotoAI/D4
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/D4-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/D4`](https://huggingface.co/MrRobotoAI/D4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/D4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/D4-Q4_K_M-GGUF --hf-file d4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/D4-Q4_K_M-GGUF --hf-file d4-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/D4-Q4_K_M-GGUF --hf-file d4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/D4-Q4_K_M-GGUF --hf-file d4-q4_k_m.gguf -c 2048
```
|
nthehai01/Qwen2.5-7B-Instruct-Math-dare-linear | nthehai01 | 2025-04-28T00:09:02Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2311.03099",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:merge:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-05T03:23:32Z | ---
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-7B
- Qwen/Qwen2.5-Math-7B
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-7B-Instruct-Math-dare-linear
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Performance
| Metric |Value|
|---------------------------------|----:|
|GSM8k (zero-shot) |90.75|
|HellaSwag (zero-shot) |80.77|
|MBPP (zero-shot) |63.08|
## Merge Details
### Merge Method
This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
merge_method: dare_linear
parameters:
lambda: 0.7484721287441042
normalize: 1.0
slices:
- sources:
- layer_range: [0, 28]
model: Qwen/Qwen2.5-7B
- layer_range: [0, 28]
model: Qwen/Qwen2.5-Math-7B
parameters:
density: 0.8456557088847347
weight: 0.11064925820848412
- layer_range: [0, 28]
model: Qwen/Qwen2.5-7B-Instruct
parameters:
density: 0.5247829319933462
weight: 0.6901952279079901
```
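For reference, a config like the one above can typically be applied with mergekit's command-line tool (a sketch, assuming the YAML is saved locally as `config.yaml`):
```bash
# Sketch: reproduce the DARE linear merge from the YAML above with mergekit.
pip install mergekit
mergekit-yaml config.yaml ./Qwen2.5-7B-Instruct-Math-dare-linear --cuda
```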
|
Paraw/chaserice | Paraw | 2025-04-28T00:08:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T00:08:11Z | ---
license: apache-2.0
---
|
dushj98/aerial-efficientnetv2-m-real-only-newaug-do-cs-test-sft | dushj98 | 2025-04-28T00:07:04Z | 0 | 0 | pytorch | [
"pytorch",
"efficientnetv2",
"image-classification",
"efficientnetv2_m",
"dushj98-aerial_real_only",
"aerial-imagery",
"generated-by-trainer-script",
"en",
"dataset:dushj98/aerial_real_only",
"region:us"
] | image-classification | 2025-04-28T00:06:32Z | ---
language: en
library_name: pytorch
tags:
- image-classification
- pytorch
- efficientnetv2_m
- dushj98-aerial_real_only
- aerial-imagery
- generated-by-trainer-script
datasets:
- dushj98/aerial_real_only
metrics:
- accuracy
- f1
---
# efficientnetv2_m fine-tuned on dushj98/aerial_real_only
This model is a version of `efficientnetv2_m` fine-tuned on the `dushj98/aerial_real_only` dataset for aerial image classification.
## Model Details
- **Model Architecture:** `efficientnetv2_m`
- **Pretrained Weights:** ImageNet (Default)
- **Training Mode:** Selective Fine-Tuning
- **Number of Classes:** 13
- **Input Size:** 224x224
- **Labels:** broadleaved_indigenous_hardwood, deciduous_hardwood, grose_broom, harvested_forest, herbaceous_freshwater_vege, high_producing_grassland, indigenous_forest, lake_pond, low_producing_grassland, manuka_kanuka, shortrotation_cropland, urban_build_up, urban_parkland
## Training Configuration
- **Dataset:** `dushj98/aerial_real_only`
- **Optimizer:** AdamW
- **Learning Rate (Initial):** 1.0E-05
- **Scheduler:** cosine
- **Epochs:** 30 (Target), 5 (Best Epoch)
- **Batch Size:** 16
- **Label Smoothing:** 0.1
- **Dropout:** 0.5
- **Mixed Precision:** True
- **Data Augmentation:** Yes (Enhanced)
- **Normalization Mean:** [0.34016817808151245, 0.40237218141555786, 0.351278156042099]
- **Normalization Std:** [0.10323596745729446, 0.07976034283638, 0.060423050075769424]
- **Seed:** 42
## Performance
- **Best Validation Accuracy:** 0.4148
- **Best Validation F1-Score:** 0.3888
- **Training Time:** 13.9 minutes
## How to Use (`transformers`)
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
import requests
# Define repository ID and load processor/model
repo_id = "dushj98/aerial-efficientnetv2-m-real-only-newaug-do-cs-test-sft"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
# Example image URL (replace with your image)
# url = "https://example.com/your_aerial_image.jpg"
# image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
# Or load from file:
# image = Image.open("path/to/your/image.jpg").convert("RGB")
# --- Placeholder: Load a sample image ---
try:
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # Example COCO image
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    print("Loaded sample image.")
except Exception as e:
    print(f"Could not load sample image: {e}. Please provide your own image.")
    image = Image.new('RGB', (224, 224), color='red')  # Dummy image
# Preprocess image
inputs = processor(images=image, return_tensors="pt")
# Make prediction
with torch.no_grad():
    logits = model(**inputs).logits
# Get predicted class index and label
predicted_label_idx = logits.argmax(-1).item()
predicted_label = model.config.id2label[predicted_label_idx]
print(f"Predicted class: {predicted_label} (ID: {predicted_label_idx})")
```
## Intended Use & Limitations
This model is intended for classifying aerial images based on the categories present in the `dushj98/aerial_real_only` dataset. Performance may vary on images significantly different from the training distribution. Evaluate carefully for your specific use case. The model inherits limitations from the base `efficientnetv2_m` architecture and ImageNet pretraining (if used). |
zhan1993/Qwen2.5-1.5B-Open-R1-Distill | zhan1993 | 2025-04-28T00:06:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T21:38:59Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zhan1993/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
VergilDante/my_model | VergilDante | 2025-04-28T00:03:54Z | 0 | 0 | null | [
"sentence-similarity",
"ae",
"dataset:nvidia/OpenCodeReasoning",
"base_model:microsoft/bitnet-b1.58-2B-4T",
"base_model:finetune:microsoft/bitnet-b1.58-2B-4T",
"license:apache-2.0",
"region:us"
] | sentence-similarity | 2025-04-28T00:02:47Z | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
language:
- ae
base_model:
- microsoft/bitnet-b1.58-2B-4T
new_version: meta-llama/Llama-4-Scout-17B-16E-Instruct
pipeline_tag: sentence-similarity
--- |
Jongsim/Qwen2.5-72B-Instruct-abliterated-4.2bpw-exl2 | Jongsim | 2025-04-28T00:01:27Z | 12 | 3 | null | [
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-72B-Instruct",
"license:other",
"region:us"
] | text-generation | 2024-10-09T10:54:39Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B-Instruct
tags:
- chat
---
Abliterated version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), utilizing code from [refusal_direction](https://github.com/andyrdt/refusal_direction).
For more information about the Abliterated technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
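For context, the core of the technique is directional ablation: estimate a "refusal" direction from mean activation differences between harmful and harmless prompts, then project it out of the residual stream. A minimal sketch (illustrative only, not the actual refusal_direction code):
```python
# Sketch of directional ablation: remove the component of a hidden state x
# along the (unit-norm) refusal direction r_hat.
import torch

def ablate_direction(x: torch.Tensor, r_hat: torch.Tensor) -> torch.Tensor:
    """x: (..., d_model) hidden states; r_hat: (d_model,) unit vector."""
    return x - (x @ r_hat).unsqueeze(-1) * r_hat

# r_hat is typically the normalized difference between mean activations on
# harmful vs. harmless prompts, and the projection is applied at every layer.
```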
[GGUF](https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated-GGUF) |
Shuu12121/CodeReranker-Crow | Shuu12121 | 2025-04-27T23:54:16Z | 0 | 0 | null | [
"safetensors",
"modernbert",
"dataset:Shuu12121/multilingual-codesearch-hard-negatives",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T19:27:30Z | ---
license: apache-2.0
datasets:
- Shuu12121/multilingual-codesearch-hard-negatives
--- |
nomadrp/mdpo-th-v17 | nomadrp | 2025-04-27T23:54:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T23:47:24Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: mdpo-th-v17
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for mdpo-th-v17
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/mdpo-th-v17", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cdij/nlp-cw-model-3 | cdij | 2025-04-27T23:47:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-27T23:47:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ankitkumar-patel-179/distilbert_base_uncased_finetuned_clinc | ankitkumar-patel-179 | 2025-04-27T23:46:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T22:57:12Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_base_uncased_finetuned_clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_base_uncased_finetuned_clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7988
- Accuracy: 0.9139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
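Expressed as `transformers.TrainingArguments`, the settings above correspond roughly to the following (a sketch, not the exact training script):
```python
# Sketch: the hyperparameters above as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert_base_uncased_finetuned_clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```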
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2842 | 0.7239 |
| 3.7874 | 2.0 | 636 | 1.8853 | 0.8442 |
| 3.7874 | 3.0 | 954 | 1.1789 | 0.8916 |
| 1.7109 | 4.0 | 1272 | 0.8840 | 0.9081 |
| 0.9258 | 5.0 | 1590 | 0.7988 | 0.9139 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
jobs-git/OpenMath-Nemotron-32B | jobs-git | 2025-04-27T23:45:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"nvidia",
"math",
"conversational",
"en",
"dataset:nvidia/OpenMathReasoning",
"arxiv:2504.16891",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:45:56Z | ---
license: cc-by-4.0
base_model:
- Qwen/Qwen2.5-32B
datasets:
- nvidia/OpenMathReasoning
language:
- en
tags:
- nvidia
- math
library_name: transformers
---
# OpenMath-Nemotron-32B
OpenMath-Nemotron-32B is created by finetuning [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) on [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) dataset.
This model is ready for commercial use.

OpenMath-Nemotron models achieve state-of-the-art results on popular mathematical benchmarks. We present metrics as pass@1 (maj@64) where pass@1
is an average accuracy across 64 generations and maj@64 is the result of majority voting.
Please see our [paper](https://arxiv.org/abs/2504.16891) for more details on the evaluation setup.
| Model | AIME24 | AIME25 | HMMT-24-25 | HLE-Math |
|-------------------------------|-----------------|-------|-------|-------------|
| DeepSeek-R1-Distill-Qwen-1.5B | 26.8 (60.0) | 21.4 (36.7) | 14.2 (26.5) | 2.9 (5.0) |
| [OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B) CoT | 61.6 (80.0) | 49.5 (66.7) | 39.9 (53.6) | 5.4 (5.4) |
| [OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B) TIR | 52.0 (83.3) | 39.7 (70.0) | 37.2 (60.7) | 2.5 (6.2) |
| + Self GenSelect | 83.3 | 70.0 | 62.2 | 7.9 |
| + 32B GenSelect | 83.3 | 70.0 | 62.8 | 8.3 |
| DeepSeek-R1-Distill-Qwen-7B | 54.4 (80.0) | 38.6 (53.3) | 30.6 (42.9) | 3.3 (5.2) |
| [OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B) CoT | 74.8 (80.0) | 61.2 (76.7) | 49.7 (57.7) | 6.6 (6.6) |
| [OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B) TIR | 72.9 (83.3) | 57.5 (76.7) | 54.6 (66.3) | 7.8 (10.8) |
| + Self GenSelect | 86.7 | 76.7 | 68.4 | 11.5 |
| + 32B GenSelect | 86.7 | 76.7 | 69.9 | 11.9 |
| DeepSeek-R1-Distill-Qwen-14B | 65.8 (80.0) | 48.4 (60.0) | 40.1 (52.0) | 4.2 (4.8) |
| [OpenMath-Nemotron-14B-MIX (kaggle)](https://huggingface.co/nvidia/OpenMath-Nemotron-14B-Kaggle) | 73.7 (86.7) | 57.9 (73.3) | 50.5 (64.8) | 5.7 (6.5) |
| [OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B) CoT | 76.3 (83.3) | 63.0 (76.7) | 52.1 (60.7) | 7.5 (7.6) |
| [OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B) TIR | 76.3 (86.7) | 61.3 (76.7) | 58.6 (70.9) | 9.5 (11.5) |
| + Self GenSelect | 86.7 | 76.7 | 72.4 | 14.1 |
| + 32B GenSelect | 90.0 | 76.7 | 71.9 | 13.7 |
| QwQ-32B | 78.1 (86.7) | 66.5 (76.7) | 55.9 (63.3) | 9.0 (9.5) |
| DeepSeek-R1-Distill-Qwen-32B | 66.9 (83.3) | 51.8 (73.3) | 39.9 (51.0) | 4.8 (6.0) |
| [OpenMath-Nemotron-32B](https://huggingface.co/nvidia/OpenMath-Nemotron-32B) CoT | 76.5 (86.7) | 62.5 (73.3) | 53.0 (59.2) | 8.3 (8.3) |
| [OpenMath-Nemotron-32B](https://huggingface.co/nvidia/OpenMath-Nemotron-32B) TIR | 78.4 (93.3) | 64.2 (76.7) | 59.7 (70.9) | 9.2 (12.5) |
| + Self GenSelect | 93.3 | 80.0 | 73.5 | 15.7 |
| DeepSeek-R1 | 79.1 (86.7) | 64.3 (73.3) | 53.0 (59.2) | 10.5 (11.4) |
We used [a version of OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B-Kaggle) to secure
first place in the [AIMO-2 Kaggle competition](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/leaderboard)!
## Reproducing our results
The pipeline we used to produce the data and models is fully open-sourced!
- [Code](https://github.com/NVIDIA/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmathreasoning-68072c0154a5099573d2e730)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathReasoning)
- [Paper](https://arxiv.org/abs/2504.16891)
We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/)
to fully reproduce our results, including data generation.
# How to use the models?
Our models can be used in 3 inference modes: chain-of-thought (CoT), tool-integrated reasoning (TIR) and generative solution selection (GenSelect).
To run inference with CoT mode, you can use this example code snippet.
```python
import transformers
import torch
model_id = "nvidia/OpenMath-Nemotron-32B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {
        "role": "user",
        "content": "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.\n\n" +
            "What is the minimum value of $a^2+6a-7$?",
    },
]
outputs = pipeline(
    messages,
    max_new_tokens=4096,
)
print(outputs[0]["generated_text"][-1]['content'])
```
To run inference with TIR or GenSelect modes, we highly recommend using our
[reference implementation in NeMo-Skills](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/evaluation/).
Please note that these models have not been instruction tuned on general data and thus might not provide good answers outside of the math domain.
## Citation
If you find our work useful, please consider citing us!
```bibtex
@article{moshkov2025aimo2,
title = {AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset},
author = {Ivan Moshkov and Darragh Hanley and Ivan Sorokin and Shubham Toshniwal and Christof Henkel and Benedikt Schifferer and Wei Du and Igor Gitman},
year = {2025},
journal = {arXiv preprint arXiv:2504.16891}
}
```
## Additional information
### License/Terms of Use: <br>
GOVERNING TERMS: Use of this model is governed by [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode.en).
Additional Information: [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B/blob/main/LICENSE).
### Deployment Geography:
Global <br>
### Use Case: <br>
This model is intended to facilitate research in the area of mathematical reasoning.
### Release Date: <br>
Huggingface 04/23/2025 <br>
## Model Architecture: <br>
**Architecture Type:** Transformer decoder-only language model <br>
**Network Architecture:** Qwen2.5 <br>
**This model was developed based on Qwen2.5-32B.** <br>
**This model has 32B model parameters.** <br>
## Input: <br>
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Input:** Context length up to 131,072 tokens <br>
## Output: <br>
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** Context length up to 131,072 tokens <br>
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
## Software Integration : <br>
**Runtime Engine(s):** <br>
* TensorRT / Triton <br>
**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Hopper <br>
**Preferred Operating System(s):** <br>
* Linux <br>
## Model Version(s):
[OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B)
[OpenMath-Nemotron-7B](https://huggingface.co/nvidia/OpenMath-Nemotron-7B)
[OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B)
[OpenMath-Nemotron-32B](https://huggingface.co/nvidia/OpenMath-Nemotron-32B)
# Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY.md), and [Privacy](./PRIVACY.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). |
mradermacher/UwU-7B-Instruct-i1-GGUF | mradermacher | 2025-04-27T23:44:57Z | 91 | 0 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:qingy2024/FineQwQ-142k",
"base_model:qingy2024/UwU-7B-Instruct",
"base_model:quantized:qingy2024/UwU-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-07T10:32:36Z | ---
base_model: qingy2024/UwU-7B-Instruct
datasets:
- qingy2024/FineQwQ-142k
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/qingy2024/UwU-7B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/UwU-7B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
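Once you pick a file from the table below, it can be run directly with llama.cpp's CLI, mirroring the usage shown on other GGUF cards (the file name here comes from the Q4_K_M row of the table):
```bash
# Sketch: run the i1-Q4_K_M quant straight from the Hub with llama.cpp.
llama-cli --hf-repo mradermacher/UwU-7B-Instruct-i1-GGUF \
  --hf-file UwU-7B-Instruct.i1-Q4_K_M.gguf -p "Hello"
```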
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/UwU-7B-Instruct-i1-GGUF/resolve/main/UwU-7B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlfoundations-dev/c1_science_0d_32s_3k | mlfoundations-dev | 2025-04-27T23:44:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:42:15Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_0d_32s_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_0d_32s_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_0d_32s_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 96
- total_eval_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
llamafy/Qwen-Qwen2.5-1.5B-llamafied | llamafy | 2025-04-27T23:38:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-16T21:29:52Z | ---
base_model: Qwen/Qwen2.5-1.5B
pipeline_tag: text-generation
library_name: transformers
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen/Qwen2.5-1.5B (llamafied)
This is a version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) converted to the Llama format. It should be compatible with all programs that support Llama; a minimal loading sketch follows.
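Since the checkpoint is in Llama format, it loads with the standard Llama classes (a sketch; nothing model-specific is assumed beyond the format):
```python
# Sketch: load the llamafied checkpoint with the stock Llama classes.
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llamafy/Qwen-Qwen2.5-1.5B-llamafied")
model = LlamaForCausalLM.from_pretrained("llamafy/Qwen-Qwen2.5-1.5B-llamafied")
```
|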
HAMMALE/speecht5-darija | HAMMALE | 2025-04-27T23:35:39Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"speecht5",
"region:us"
] | null | 2025-04-27T23:15:57Z | # Moroccan Darija Text-to-Speech Model
This model is a fine-tuned version of SpeechT5 for Moroccan Darija Text-to-Speech synthesis.
## Model Details
- **Base Model**: Microsoft SpeechT5
- **Fine-tuned on**: DODa audio dataset
- **Languages**: Moroccan Darija (Latin script)
- **Features**: Multiple voice support (male/female)
- **Release Date**: April 2025
## Usage
```python
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
import torch
import soundfile as sf
# Load models
processor = SpeechT5Processor.from_pretrained("HAMMALE/speecht5-darija")
model = SpeechT5ForTextToSpeech.from_pretrained("HAMMALE/speecht5-darija")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
# Load speaker embedding (replace with your own speaker embedding)
speaker_embedding = torch.randn(1, 512) # Example embedding
# Process text
text = "Salam, kifach nta lyoum?"
inputs = processor(text=text, return_tensors="pt")
# Generate speech
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
# Save audio file
sf.write("output.wav", speech.numpy(), 16000)
```
## Demo
A live demo is available at [Hugging Face Spaces](https://huggingface.co/spaces/HAMMALE/darija-tts-demo)
## License
This model is available under the MIT License.
## Acknowledgments
- The [DODa audio dataset](https://huggingface.co/datasets/atlasia/DODa-audio-dataset) creators
- Microsoft Research for the SpeechT5 model architecture
|
juanbovier/caller_model | juanbovier | 2025-04-27T23:30:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T23:30:29Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** juanbovier
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
waowao/bitnet-jacommon-sense-qa-lora | waowao | 2025-04-27T23:26:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bitnet",
"text-generation",
"custom_code",
"ja",
"dataset:sbintuitions/JCommonsenseQA",
"base_model:microsoft/bitnet-b1.58-2B-4T-bf16",
"base_model:quantized:microsoft/bitnet-b1.58-2B-4T-bf16",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-04-27T22:55:47Z | ---
library_name: transformers
license: cc-by-sa-4.0
datasets:
- sbintuitions/JCommonsenseQA
language:
- ja
base_model:
- microsoft/bitnet-b1.58-2B-4T-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
This is Microsoft's bitnet-b1 LoRA-tuned on JCommonsenseQA. Since it has not been compiled, it runs as-is in bf16.
## Test
```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    # User prompt (roughly: "What is something mainly aimed at children,
    # containing a story with illustrations?")
    {"role": "user", "content": "主に子ども向けのもので、イラストのついた物語が書かれているものは?"},
]
```
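A sketch of how such a prompt might be run (illustrative; the exact generation settings behind the outputs below are not given in the card):
```python
# Sketch: generate a reply for the chat above. trust_remote_code is needed
# for the custom bitnet architecture; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "waowao/bitnet-jacommon-sense-qa-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```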
### Before Tuning
この質問は、英文の言語で="In the end, it is stories about children that make me write manga."と言えます。
### After Tuning
0 den ishani, SHAKA, SU, OC, ANIMAL, MUSEUM, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL
INTO, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL, ANIMAL
ANIMAL, ANIMAL, ANIMAL, ANIMAL
## Result
Because the tuning was insufficient, the model's language ability actually got worse. |
KaranKKR/Llama-3.2-3B-float16-finetuned | KaranKKR | 2025-04-27T23:26:06Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:20:04Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KaranKKR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/CodeLlama-13b-hf-4bit-mlx | mlx-community | 2025-04-27T23:22:39Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"llama-2",
"text-generation",
"code",
"base_model:codellama/CodeLlama-13b-hf",
"base_model:quantized:codellama/CodeLlama-13b-hf",
"license:llama2",
"4-bit",
"region:us"
] | text-generation | 2025-04-27T23:15:54Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
base_model: codellama/CodeLlama-13b-hf
library_name: mlx
---
# mlx-community/CodeLlama-13b-hf-4bit-mlx
This model [mlx-community/CodeLlama-13b-hf-4bit-mlx](https://huggingface.co/mlx-community/CodeLlama-13b-hf-4bit-mlx) was
converted to MLX format from [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-13b-hf-4bit-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
dgambettaphd/M_llm2_gen2_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-27T23:17:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T23:17:40Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SeoulX/mbert-intent-classifier | SeoulX | 2025-04-27T23:17:19Z | 0 | 0 | transformers | [
"transformers",
"joblib",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T23:08:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MoyYuan/DeductiveReasoning-wd-all | MoyYuan | 2025-04-27T23:13:30Z | 0 | 0 | null | [
"pytorch",
"bert",
"en",
"dataset:MoyYuan/DeductiveReasoning",
"license:mit",
"region:us"
] | null | 2025-04-27T23:07:28Z | ---
license: mit
datasets:
- MoyYuan/DeductiveReasoning
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/DeductiveReasoning for README information. |
qingy2024/UwU-7B-Instruct | qingy2024 | 2025-04-27T23:11:32Z | 15 | 30 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:qingy2024/FineQwQ-142k",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-12-31T14:22:02Z | ---
license: apache-2.0
datasets:
- qingy2024/FineQwQ-142k
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
---
## UwU 7B Instruct
Small QwQ, fully fine-tuned on FineQwQ-142k. Unlike my previous models, this one is a general-purpose reasoning machine!
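A minimal usage sketch with `transformers` (assuming the standard Qwen2.5 chat template shipped with the tokenizer; the generation budget is illustrative, since reasoning models need room for their chain of thought):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/UwU-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many 'r's are in the word strawberry?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models need a generous token budget for their chain of thought
output = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```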
Oh, and it also passes the strawberry test:
```
> How many 'r's are in the word strawberry?
Let's see. I have this word: "strawberry." I need to find out how many 'r's are in it. Okay, first, I should probably write down the word and look at each letter one by one.
So, "s-t-r-a-w-b-e-r-r-y."
Now, I need to count the 'r's. Let's go through each letter:
- The first letter is 's', which is not an 'r'.
- The second letter is 't', not an 'r'.
- The third letter is 'r', that's the first 'r'.
- The fourth letter is 'a', not an 'r'.
- The fifth letter is 'w', not an 'r'.
- The sixth letter is 'b', not an 'r'.
- The seventh letter is 'e', not an 'r'.
- The eighth letter is 'r', that's the second 'r'.
- The ninth letter is 'r', that's the third 'r'.
- The tenth letter is 'y', not an 'r'.
So, I have three 'r's in the word "strawberry." But wait, let me double-check. Maybe I missed one.
Let me count again:
1. 's'
2. 't'
3. 'r' (first 'r')
4. 'a'
5. 'w'
6. 'b'
7. 'e'
8. 'r' (second 'r')
9. 'r' (third 'r')
10. 'y'
Yes, that's three 'r's. I think that's correct.
Alternatively, maybe there's a smarter way to count them without listing out each letter. Let's see.
I know that the word "strawberry" has 10 letters, and I can recall that it has three 'r's. But to be sure, maybe I can look for patterns or something.
Another way is to use a tally mark or something similar. Let's try that.
First 'r': | (that's one)
Second 'r': | (that's two)
Third 'r': | (that's three)
Yes, that also gives me three 'r's.
I think I'm confident with that answer now. There are three 'r's in the word "strawberry."
**Final Answer**
\[ \boxed{3} \]
``` |
qingy2024/Qwen2.5-4B | qingy2024 | 2025-04-27T23:11:20Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-03T19:39:31Z | ---
base_model:
- Qwen/Qwen2.5-3B
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 6]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [4, 12]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [10, 18]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [16, 24]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [22, 30]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [28, 36]
model: Qwen/Qwen2.5-3B
merge_method: passthrough
dtype: bfloat16
```
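Because the slice ranges overlap, the merged model ends up with more layers than the 36-layer base; a quick check of the config above (ranges are end-exclusive):

```python
# Layer ranges copied from the passthrough config above (end-exclusive)
slices = [(0, 6), (4, 12), (10, 18), (16, 24), (22, 30), (28, 36)]
print(sum(end - start for start, end in slices))  # 46 layers vs. 36 in Qwen2.5-3B
```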
|
ikerm11/gemma1b_humanizer_lora | ikerm11 | 2025-04-27T23:10:40Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-04-26T03:56:09Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="ikerm11/gemma1b_humanizer_lora")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("ikerm11/gemma1b_humanizer_lora")
model = AutoModelForCausalLMWithValueHead.from_pretrained("ikerm11/gemma1b_humanizer_lora")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
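The value head's output is part of the tuple returned by the forward pass; a short sketch of unpacking it, assuming the current TRL return convention of `(lm_logits, loss, value)`:

```python
# Continuing from the snippet above: unpack the value head output
lm_logits, loss, value = outputs  # value has shape (batch, seq_len)
print(value[:, -1])  # per-sequence value estimate at the final token
```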
|
fedovtt/03bffe26-9fff-4036-b3a8-3b6edd82b213 | fedovtt | 2025-04-27T23:10:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-27T22:46:51Z | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 03bffe26-9fff-4036-b3a8-3b6edd82b213
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 42e2462092e5ffc1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/42e2462092e5ffc1_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: accepted
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/03bffe26-9fff-4036-b3a8-3b6edd82b213
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/42e2462092e5ffc1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66888fb8-bc1d-4af3-94d4-270c1e03d895
wandb_project: s56-1
wandb_run: your_name
wandb_runid: 66888fb8-bc1d-4af3-94d4-270c1e03d895
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 03bffe26-9fff-4036-b3a8-3b6edd82b213
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5309
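Since this repository contains a PEFT (LoRA) adapter rather than full weights, a minimal loading sketch (assuming the adapter in this repo applies on top of the base model named above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer"
adapter_id = "fedovtt/03bffe26-9fff-4036-b3a8-3b6edd82b213"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```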
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4421 | 0.0691 | 200 | 1.5309 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/c1_code_nod_16s_0.3k | mlfoundations-dev | 2025-04-27T23:10:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:28:43Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_nod_16s_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_nod_16s_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_nod_16s_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
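For reference, the `total_train_batch_size` above is simply the product of the per-device batch size, the number of devices, and the gradient accumulation steps:

```python
# Effective batch size = per-device batch × devices × gradient accumulation
per_device, devices, grad_accum = 1, 4, 8
print(per_device * devices * grad_accum)  # 32, matching total_train_batch_size
```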
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
KaranKKR/Llama-3.2-3B-base_tockenizer-finetuned | KaranKKR | 2025-04-27T23:09:33Z | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T23:09:28Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KaranKKR/Llama-3.2-3B-base_model-finetuned | KaranKKR | 2025-04-27T23:09:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T23:09:07Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KaranKKR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jncraton/Qwen2.5-0.5B-Instruct-ct2-int8 | jncraton | 2025-04-27T23:08:31Z | 8 | 0 | transformers | [
"transformers",
"chat",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-21T00:28:18Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
base_model_relation: quantized
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
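Note that this repository contains a CTranslate2 int8 conversion; loading it directly with the `ctranslate2` package might look like the following sketch (the on-disk layout of this repo is assumed):

```python
import ctranslate2
from transformers import AutoTokenizer

# Assumes the CTranslate2 files have been downloaded to this directory
generator = ctranslate2.Generator("Qwen2.5-0.5B-Instruct-ct2-int8", device="cpu")
tokenizer = AutoTokenizer.from_pretrained("jncraton/Qwen2.5-0.5B-Instruct-ct2-int8")

messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(text))

results = generator.generate_batch([tokens], max_length=512, include_prompt_in_result=False)
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].sequences[0])))
```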
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
AICAM123456/AICAM | AICAM123456 | 2025-04-27T23:04:35Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
Khanhchi1989P/KHANHCHI | Khanhchi1989P | 2025-04-27T23:04:35Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
KieuM5601/KIEUM | KieuM5601 | 2025-04-27T23:04:34Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
ChuManh535/CHUMANH | ChuManh535 | 2025-04-27T23:04:34Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-27T23:04:34Z | ---
license: bigscience-openrail-m
---
|
Volko76/Qwen2.5-Coder-3B-GGUF | Volko76 | 2025-04-27T23:03:06Z | 59 | 0 | transformers | [
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"autoquant",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2409.12186",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-20T09:26:17Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
- autoquant
- gguf
---
# Qwen2.5-Coder-3B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
**This repo contains the 3B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training (e.g., SFT, RLHF, continued pretraining) or use this model for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
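Since this repository ships GGUF quantizations, a llama.cpp-based loader is the more typical path. Below is a sketch with `llama-cpp-python`, using Qwen2.5-Coder's documented fill-in-the-middle special tokens; the GGUF filename is illustrative, so pick one of the files actually in this repo:

```python
from llama_cpp import Llama

# Filename is illustrative; substitute one of the GGUF files in this repo
llm = Llama(model_path="qwen2.5-coder-3b-q4_k_m.gguf", n_ctx=8192)

# Fill-in-the-middle: the model completes the code between prefix and suffix
prompt = (
    "<|fim_prefix|>def fib(n):\n    if n < 2:\n        return n\n"
    "<|fim_suffix|>\n\nprint(fib(10))<|fim_middle|>"
)
out = llm(prompt, max_tokens=128, stop=["<|endoftext|>"])
print(out["choices"][0]["text"])
```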
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
AndresR2909/Phi-4-unsloth-bnb-4bit_gguf | AndresR2909 | 2025-04-27T23:02:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T22:38:25Z | ---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AndresR2909
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dushj98/aerial-mobilenet-v2-real-only-newaug-do-cs-test-sft | dushj98 | 2025-04-27T22:53:35Z | 0 | 0 | pytorch | [
"pytorch",
"mobilenet",
"image-classification",
"mobilenet_v2",
"dushj98-aerial_real_only",
"aerial-imagery",
"generated-by-trainer-script",
"en",
"dataset:dushj98/aerial_real_only",
"region:us"
] | image-classification | 2025-04-27T22:53:30Z | ---
language: en
library_name: pytorch
tags:
- image-classification
- pytorch
- mobilenet_v2
- dushj98-aerial_real_only
- aerial-imagery
- generated-by-trainer-script
datasets:
- dushj98/aerial_real_only
metrics:
- accuracy
- f1
---
# mobilenet_v2 fine-tuned on dushj98/aerial_real_only
This model is a version of `mobilenet_v2` fine-tuned on the `dushj98/aerial_real_only` dataset for aerial image classification.
## Model Details
- **Model Architecture:** `mobilenet_v2`
- **Pretrained Weights:** ImageNet (Default)
- **Training Mode:** Selective Fine-Tuning
- **Number of Classes:** 13
- **Input Size:** 224x224
- **Labels:** broadleaved_indigenous_hardwood, deciduous_hardwood, grose_broom, harvested_forest, herbaceous_freshwater_vege, high_producing_grassland, indigenous_forest, lake_pond, low_producing_grassland, manuka_kanuka, shortrotation_cropland, urban_build_up, urban_parkland
## Training Configuration
- **Dataset:** `dushj98/aerial_real_only`
- **Optimizer:** AdamW
- **Learning Rate (Initial):** 2.0E-05
- **Scheduler:** cosine
- **Epochs:** 30 (Target), 28 (Best Epoch)
- **Batch Size:** 64
- **Label Smoothing:** 0.1
- **Dropout:** 0.5
- **Mixed Precision:** True
- **Data Augmentation:** Yes (Enhanced)
- **Normalization Mean:** [0.34016817808151245, 0.40237218141555786, 0.351278156042099]
- **Normalization Std:** [0.10323596745729446, 0.07976034283638, 0.060423050075769424]
- **Seed:** 42
## Performance
- **Best Validation Accuracy:** 0.3772
- **Best Validation F1-Score:** 0.3462
- **Training Time:** 21.3 minutes
## How to Use (`transformers`)
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
import requests
# Define repository ID and load processor/model
repo_id = "dushj98/aerial-mobilenet-v2-real-only-newaug-do-cs-test-sft"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
# Example image URL (replace with your image)
# url = "https://example.com/your_aerial_image.jpg"
# image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
# Or load from file:
# image = Image.open("path/to/your/image.jpg").convert("RGB")
# --- Placeholder: Load a sample image ---
try:
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # Example COCO image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
print("Loaded sample image.")
except Exception as e:
print(f"Could not load sample image: {e}. Please provide your own image.")
image = Image.new('RGB', (224, 224), color = 'red') # Dummy image
# Preprocess image
inputs = processor(images=image, return_tensors="pt")
# Make prediction
with torch.no_grad():
logits = model(**inputs).logits
# Get predicted class index and label
predicted_label_idx = logits.argmax(-1).item()
predicted_label = model.config.id2label[predicted_label_idx]
print(f"Predicted class: {predicted_label} (ID: {predicted_label_idx})")
```
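If you instead load the raw PyTorch weights, the training-time normalization statistics listed above should be applied manually; a sketch with torchvision (mean/std copied from this card):

```python
from torchvision import transforms

# Normalization statistics from the training configuration above
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.34016817808151245, 0.40237218141555786, 0.351278156042099],
        std=[0.10323596745729446, 0.07976034283638, 0.060423050075769424],
    ),
])
tensor = preprocess(image).unsqueeze(0)  # image: a PIL.Image, as in the snippet above
```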
## Intended Use & Limitations
This model is intended for classifying aerial images based on the categories present in the `dushj98/aerial_real_only` dataset. Performance may vary on images significantly different from the training distribution. Evaluate carefully for your specific use case. The model inherits limitations from the base `mobilenet_v2` architecture and ImageNet pretraining (if used). |
mlfoundations-dev/c1_science_0d_16s_1k | mlfoundations-dev | 2025-04-27T22:51:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:22:44Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_science_0d_16s_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_science_0d_16s_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_science_0d_16s_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Alijeff1214/DeutscheLexAI_BGB_2.0 | Alijeff1214 | 2025-04-27T22:50:53Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"grpo",
"LLM",
"BGB",
"German",
"AI",
"DeepLearning",
"ReinforcementLearning",
"MachineLearning",
"question-answering",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-03-29T01:00:56Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
- LLM
- BGB
- German
- transformers
- AI
- DeepLearning
- ReinforcementLearning
- MachineLearning
license: apache-2.0
base_model:
- Qwen/Qwen2.5-3B
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: question-answering
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
DeutscheLexAI_BGB_2.0 is a fine-tuned Qwen2.5-3B model, a further-trained and more accurate version with an output context length of up to 500 tokens, specializing in German legal text processing and trained on the Bürgerliches Gesetzbuch (BGB) dataset. It enhances legal text understanding, summarization, and reasoning for German legal documents.
- **Developed by:** [Ali Asghar ([email protected])]
- **Funded by [optional]:** [still in progress]
- **Shared by [optional]:** []
- **Model type:** [Large Language Model (LLM)]
- **Language(s) (NLP):** [pytorch,transformers,python]
- **License:** [Apache 2.0]
- **Finetuned from model [optional]:** [Qwen2.5-3B]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/Alijeff1214/DeutscheLexAI_BGB_2.0/tree/main
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
DeutscheLexAI_BGB is a fine-tuned Qwen2.5-3B model specializing in German legal text processing, trained on the Bürgerliches Gesetzbuch (BGB) dataset. It enhances legal text understanding, summarization, and reasoning for German legal documents.
### Direct Use
Legal research: Extract, summarize, and analyze BGB texts.
AI-powered legal assistants: Provide insights into German law.
Academic purposes: Assists in legal document structuring.
### Downstream Use [optional]
Chatbots for legal guidance.
AI-based contract analysis.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
The model may reflect biases in the BGB dataset.
Not suitable for real-time legal decision-making.
Might struggle with non-German legal texts.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
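In the absence of an official snippet, a minimal loading sketch (assuming standard `transformers` weights in this repository; the question is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alijeff1214/DeutscheLexAI_BGB_2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

question = "Was regelt § 433 BGB?"  # illustrative German legal question
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=500)  # card states up to 500 output tokens
print(tokenizer.decode(output[0], skip_special_tokens=True))
```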
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed]

```python
from trl import GRPOTrainer  # import added; reward functions are defined elsewhere

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[
        xmlcount_reward_func,
        soft_format_reward_func,
        strict_format_reward_func,
        int_reward_func,
        correctness_reward_func,
    ],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```
### Test on HF Space
https://huggingface.co/spaces/Alijeff1214/DeutecheLexAI_BGB
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**

```
@article{DeutscheLexAI_BGB,
  title={DeutscheLexAI_BGB: A Fine-Tuned Qwen2.5-3B Model for German Legal Texts},
  author={Your Name or Organization},
  journal={Hugging Face Model Hub},
  year={2025},
  url={https://huggingface.co/Alijeff1214/DeutscheLexAI_BGB_2.0}
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
Ali Asghar
## Model Card Contact
[email protected] |
Panchovix/Llama-3_1-Nemotron-Ultra-253B-v1-3.25bpw-h6-exl3 | Panchovix | 2025-04-27T22:37:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nemotron-nas",
"text-generation",
"nvidia",
"llama-3",
"pytorch",
"conversational",
"custom_code",
"en",
"arxiv:2503.18908",
"arxiv:2502.00203",
"arxiv:2411.19146",
"license:other",
"autotrain_compatible",
"exl3",
"region:us"
] | text-generation | 2025-04-27T21:59:34Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---
EXL3 quant for 3.25BPW. It fits into 128GB, with semi-decent context.
# Llama-3.1-Nemotron-Ultra-253B-v1
## Model Overview

Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) which is a derivative of [Meta Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) (AKA the *reference model*). It is a reasoning model that is post trained for reasoning, human chat preferences, and tasks, such as RAG and tool calling. The model supports a context length of 128K tokens. This model fits on a single 8xH100 node for inference.
Llama-3.1-Nemotron-Ultra-253B-v1 is a model which offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model’s memory footprint, enabling larger workloads, as well as reducing the number of GPUs required to run the model in a data center environment. This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff. Furthermore, by using a novel method to vertically compress the model (see details [here](https://arxiv.org/abs/2503.18908)), it also offers a significant improvement in latency.
The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, Chat, and Tool Calling as well as multiple reinforcement learning (RL) stages using Group Relative Policy Optimization (GRPO) algorithms for reasoning, chat, and instruction-following.
This model is ready for commercial use.
For more details on how the model was trained, please see [this blog](https://developer.nvidia.com/blog/build-enterprise-ai-agents-with-advanced-open-nvidia-llama-nemotron-reasoning-models/).

This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here:
- [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1)
- [Llama-3.3-Nemotron-Super-49B-v1](https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1)
## License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License.](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) Additional Information: [Llama 3.1 Community License Agreement](https://www.llama.com/llama3_1/license/). Built with Llama.
**Model Developer:** NVIDIA
**Model Dates:** Trained between November 2024 and April 2025
**Data Freshness:** The pretraining data has a cutoff of 2023 per Llama-3.1-405B-Instruct
### Use Case:
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks.
### Release Date:
2025-04-07
## References
* [\[2502.00203\] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment](https://arxiv.org/abs/2502.00203)
* [\[2411.19146\]Puzzle: Distillation-Based NAS for Inference-Optimized LLMs](https://arxiv.org/abs/2411.19146)
* [\[2503.18908\]FFN Fusion: Rethinking Sequential Computation in Large Language Models](https://arxiv.org/abs/2503.18908)
## Model Architecture
**Architecture Type:** Dense decoder-only Transformer model
**Network Architecture:** Llama-3.1-405B-Instruct, customized through Neural Architecture Search (NAS)
**This model was developed based on Llama-3.1-405B-Instruct.** <br>
**This model has 253B model parameters.** <br>
The model is a derivative of Llama 3.1-405B-Instruct, using Neural Architecture Search (NAS). The NAS algorithm results in non-standard and non-repetitive blocks. This includes the following:
* Skip attention: In some blocks, the attention is skipped entirely, or replaced with a single linear layer.
* Variable FFN: The expansion/compression ratio in the FFN layer is different between blocks.
* FFN Fusion: When several consecutive attention layers are skipped, which can result in a sequence of multiple FFNs, that sequence of FFNs are fused into a smaller number of wider FFN layers.
For each block of the reference model, we create multiple variants providing different tradeoffs of quality vs. computational complexity, discussed in more depth below. We then search over the blocks to create a model which meets the required throughput and memory while minimizing the quality degradation. To recover performance, the model initially undergoes knowledge distillation (KD) for 65 billion tokens. This is followed by a continual pretraining (CPT) phase for 88 billion tokens.
## Intended use
Llama-3.1-Nemotron-Ultra-253B-v1 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported.
## Input
- **Input Type:** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Context length up to 131,072 tokens
## Output
- **Output Type:** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Context length up to 131,072 tokens
## Software Integration
- **Runtime Engine:** Transformers
- **Recommended Hardware Microarchitecture Compatibility:**
- NVIDIA Hopper
- NVIDIA Ampere
- **Preferred Operating System(s):** Linux
## Model Version
1.0 (4/7/2025)
## Quick Start and Usage Recommendations:
1. Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt
2. We recommend setting temperature to \`0.6\`, and Top P to \`0.95\` for Reasoning ON mode
3. We recommend using greedy decoding (temperature 0\) for Reasoning OFF mode
4. We do not recommend adding additional system prompts besides the control prompt; all instructions should be put into the user query
5. We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required
You can try this model out through the preview API, using this link: [Llama-3_1-Nemotron-Ultra-253B-v1](https://build.nvidia.com/nvidia/llama-3_1-nemotron-ultra-253b-v1).
### Use It with Transformers
See the snippet below for usage with [Hugging Face Transformers](https://huggingface.co/docs/transformers/main/en/index) library. Reasoning mode (ON/OFF) is controlled via system prompt. Please see the example below
We recommend using the *transformers* package with version 4.48.3.
Example of reasoning on:
```py
import torch
import transformers
model_id = "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
temperature=0.6,
top_p=0.95,
**model_kwargs
)
thinking = "on"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
Example of reasoning off:
```py
import torch
import transformers
model_id = "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
do_sample=False,
**model_kwargs
)
thinking = "off"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
### Use It with vLLM
```
pip install vllm==0.8.3
```
An example on how to serve with vLLM:
```
python3 -m vllm.entrypoints.openai.api_server \
--model "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" \
--trust-remote-code \
--seed=1 \
--host="0.0.0.0" \
--port=5000 \
--served-model-name "nvidia/Llama-3_1-Nemotron-Ultra-253B-v1" \
--tensor-parallel-size=8 \
--max-model-len=32768 \
--gpu-memory-utilization 0.95 \
--enforce-eager
```
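Once the server is running, any OpenAI-compatible client can query it; a sketch with the `openai` Python package (host/port taken from the command above, reasoning mode controlled via the system prompt as described earlier):

```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:5000/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="nvidia/Llama-3_1-Nemotron-Ultra-253B-v1",
    messages=[
        {"role": "system", "content": "detailed thinking on"},
        {"role": "user", "content": "Solve x*(sin(x)+2)=0"},
    ],
    temperature=0.6,  # recommended for Reasoning ON mode
    top_p=0.95,
)
print(response.choices[0].message.content)
```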
## Inference:
**Engine:**
- Transformers
**Test Hardware:**
- BF16:
- 8x NVIDIA H100-80GB
- 4x NVIDIA B100
- FP 8
- 4x NVIDIA H100-80GB
## Training and Evaluation Datasets
## Training Datasets
A large variety of training data was used for the knowledge distillation phase before the post-training pipeline, three sources of which were FineWeb, Buzz-V1.2, and Dolma.
The data for the multi-stage post-training phases is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model.
Prompts have been sourced either from public, open corpora or generated synthetically. Responses were synthetically generated by a variety of models, with some prompts containing responses for both reasoning-on and reasoning-off modes, to train the model to distinguish between the two modes. This model was improved with Qwen.
We have released our [Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) to promote openness and transparency in model development and improvement.
**Data Collection for Training Datasets:**
- Hybrid: Automated, Human, Synthetic
**Data Labeling for Training Datasets:**
- Hybrid: Automated, Human, Synthetic
## Evaluation Datasets
We used the datasets listed in the next section to evaluate Llama-3.1-Nemotron-Ultra-253B-v1.
Data Collection for Evaluation Datasets:
- Hybrid: Human/Synthetic
Data Labeling for Evaluation Datasets:
- Hybrid: Human/Synthetic/Automatic
## Evaluation Results
*These results cover both Reasoning On and Reasoning Off modes. We recommend using temperature=\`0.6\`, top\_p=\`0.95\` for Reasoning On mode, and greedy decoding for Reasoning Off mode. All evaluations are done with a 32k sequence length. We run the benchmarks up to 16 times and average the scores for accuracy.*
> NOTE: Where applicable, a Prompt Template will be provided. While completing benchmarks, please ensure that you are parsing for the correct output format as per the provided prompt in order to reproduce the benchmarks seen below.
### GPQA
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 56.60 |
| Reasoning On | 76.01 |
User Prompt Template:
```
"What is the correct answer to this question: {question}\nChoices:\nA. {option_A}\nB. {option_B}\nC. {option_C}\nD. {option_D}\nLet's think step by step, and put the final answer (should be a single letter A, B, C, or D) into a \boxed{}"
```
### AIME25
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 16.67 |
| Reasoning On | 72.50 |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
### BFCL V2 Live
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 73.62 |
| Reasoning On | 74.10 |
User Prompt Template:
```
You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the function can be used, point it out. If the given question lacks the parameters required by the function,
also point it out. You should only return the function call in tools call sections.
If you decide to invoke any of the function(s), you MUST put it in the format of <TOOLCALL>[func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]</TOOLCALL>
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.
<AVAILABLE_TOOLS>{functions}</AVAILABLE_TOOLS>
{user_prompt}
```
### LiveCodeBench (20240801-20250201)
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 29.03 |
| Reasoning On | 66.31 |
User Prompt Template (without starter code):
````
"You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.
Question: {prompt}
Read the inputs from stdin solve the problem and write the answer to stdout (do not directly test on the sample inputs). Enclose your code within delimiters as follows. Ensure that when the python program runs, it reads the inputs, runs the algorithm and writes output to STDOUT.
```python
# YOUR CODE HERE
```
````
User Prompt Template (with starter code):
````
You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.
Question: {prompt}
You will use the following starter code to write the solution to the problem and enclose your code within delimiters.
```python
{starter_code}
```
````
### IFEval
| Reasoning Mode | Strict:Instruction |
|--------------|------------|
| Reasoning Off | 88.85 |
| Reasoning On | 89.45 |
### MATH500
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 80.40 |
| Reasoning On | 97.00 |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY_and_SECURITY.md), and [Privacy](./PRIVACY.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). |