modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
tugaa/PaJpRAG_system | tugaa | 2025-06-01T05:32:54Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"text-generation-inference",
"RAG",
"japanese",
"faiss",
"gradio",
"ja",
"dataset:izumi-lab/oscar2301-ja-filter-ja-normal",
"dataset:wikimedia/wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T20:13:20Z | ---
license: apache-2.0
datasets:
- izumi-lab/oscar2301-ja-filter-ja-normal
- wikimedia/wikipedia
language:
- ja
tags:
- text-generation-inference
- RAG
- japanese
- faiss
- gradio
- sentence-transformers
---
# RAG (Retrieval Augmented Generation) Demo
This project demonstrates a Retrieval Augmented Generation (RAG) system. To address the "hallucination" problem and the information-freshness limitations of large language models (LLMs), it retrieves relevant information from an external knowledge base and generates answers grounded in it.
## Features
* **RAG system implementation:** Integrates semantic search with SentenceTransformer and fast vector search with FAISS.
* **Japanese LLM:** Uses the `rinna/japanese-gpt-neox-3.6b-instruction-sft` model for inference (see the loading sketch below).
* **Custom knowledge base:** Users can upload their own JSON files to dynamically update the RAG system's knowledge base.
* **Gradio UI:** Provides an intuitive, easy-to-use web interface for visually inspecting how the RAG system behaves.
* **Persistence:** Uploaded documents and built indexes are preserved across application restarts.
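A minimal loading sketch for the LLM (assumptions: `use_fast=False` follows the rinna model card's recommendation, and `app.py` may configure generation differently):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only; app.py may wire up the pipeline differently.
tokenizer = AutoTokenizer.from_pretrained(
    "rinna/japanese-gpt-neox-3.6b-instruction-sft", use_fast=False
)
model = AutoModelForCausalLM.from_pretrained(
    "rinna/japanese-gpt-neox-3.6b-instruction-sft",
    torch_dtype=torch.float16,
    device_map="auto",
)
```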
## System Structure
* **`app.py`**: The main application file; it defines the Gradio UI and wires together the RAGSystem and the LLM pipeline.
* **`ragsys03.py`**: The module that encapsulates the RAG system's core logic (document embedding, FAISS index construction and search, document management, and persistence); a sketch follows the directory tree below.
* **`rag_data/`**: The directory where the FAISS index files, loaded document data, and metadata generated by the RAG system are stored.
```
.
├── app.py
├── ragsys03.py
├── requirements.txt
└── rag_data/ (indexes and documents generated by the RAG system are stored here)
    └── (date_UUID)/
        ├── faiss_index.bin
        ├── documents.json
        └── metadata.json
```
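As a rough sketch of the core logic described for `ragsys03.py` (illustrative only: the embedding model name is an assumption, and the real module may differ):
```python
import faiss
from sentence_transformers import SentenceTransformer

# Embedding model is an assumption; ragsys03.py may use a different one.
encoder = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

documents = [
    "RAG grounds LLM answers in retrieved text.",
    "Hallucination means generating false facts.",
]
embeddings = encoder.encode(documents, convert_to_numpy=True)
faiss.normalize_L2(embeddings)  # unit vectors, so inner product = cosine similarity

index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

query = encoder.encode(["What is RAG?"], convert_to_numpy=True)
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)  # top_k = 2
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[i]}")
```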
## Required Libraries
The following libraries are required; they are listed in `requirements.txt`.
```
torch
transformers
sentence-transformers
faiss-cpu
gradio
numpy
sentencepiece
accelerate
```
To install them, run the following command:
```bash
pip install -r requirements.txt
```
**Note:** If you want GPU acceleration, install `faiss-gpu` instead of `faiss-cpu`.
## How to Run
1. Clone the GitHub repository or download the files from the Hugging Face Space.
2. Place `app.py` and `ragsys03.py` in the same directory.
3. Install the libraries listed under "Required Libraries" above.
4. Run the following command in a terminal to start the application:
```bash
python app.py
```
5. Open the displayed URL (usually `http://127.0.0.1:7860`, or the Hugging Face Spaces URL) in a browser.
## Usage (Gradio UI)
1. **Go to the "Document Management" tab.**
2. **"Upload document JSON file":**
* Select and upload a JSON file whose `documents` key contains a list of strings.
* Example:
```json
{
"documents": [
"RAG(Retrieval Augmented Generation)は、大規模言語モデルの課題、特に幻覚や情報鮮度の問題を解決するために考案された強力なAIフレームワークです。",
"LLMの幻覚(Hallucination)は、大規模言語モデルが事実と異なる情報を生成してしまう問題です。"
]
}
```
* A status message is shown once the upload completes.
3. **Click the "Build Index" button.**
* A search index is built from the uploaded documents. This can take some time.
* When construction finishes, index statistics are displayed.
4. **Go to the "RAG Question" tab.**
5. **"Enter your question":** Type your question.
6. **"Number of retrieved documents (top_k)" and "Similarity threshold":** Adjust the sliders as needed.
7. **Click the "Submit Question" button.**
* The RAG system retrieves relevant documents, and the LLM generates an answer based on them.
* The "LLM answer" and the "Retrieved related documents" are then displayed.
## Notes
* This demo also runs on CPU, but `rinna/japanese-gpt-neox-3.6b-instruction-sft` is a relatively large model, so responses may be slow.
* For faster responses, running on a GPU is recommended.
* Make sure the documents you upload are suitable for public sharing; do not upload documents containing private or confidential information.
* The LLM's answers are based on the provided documents and the model's own knowledge. They are not always accurate, and "hallucinations" are not completely eliminated.
## Contributing
This project is intended as a demonstration, but improvement suggestions and bug reports are welcome.
|
mradermacher/olmOCR-7B-0225-preview-GGUF | mradermacher | 2025-06-01T05:31:23Z | 67 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:allenai/olmOCR-7B-0225-preview",
"base_model:quantized:allenai/olmOCR-7B-0225-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-13T03:14:49Z | ---
base_model: allenai/olmOCR-7B-0225-preview
datasets:
- allenai/olmOCR-mix-0225
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allenai/olmOCR-7B-0225-preview
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
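For a programmatic route, a minimal sketch with llama-cpp-python (text-only; the filename is taken from the table below, and vision use would additionally need the mmproj file):
```python
from llama_cpp import Llama

# Download olmOCR-7B-0225-preview.Q4_K_M.gguf from this repo first.
llm = Llama(model_path="olmOCR-7B-0225-preview.Q4_K_M.gguf", n_ctx=4096)
out = llm("Summarize what OCR post-correction means:", max_tokens=128)
print(out["choices"][0]["text"])
```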
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-0225-preview-GGUF/resolve/main/olmOCR-7B-0225-preview.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
CHOOSEIT/MCQATEST_FFT_SciQ-E_Crazy_LoRA__checkpoint_52500__B4_2E_512T_LR1e-05_ACC4 | CHOOSEIT | 2025-06-01T05:27:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T05:26:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LinaSad/mcqa_mmlu_lora | LinaSad | 2025-06-01T05:25:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T05:24:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
St-ep/ppo-LunarLander-v2 | St-ep | 2025-06-01T05:21:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-01T03:50:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.69 +/- 17.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
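A sketch for checking the reported score (assumes `gymnasium[box2d]` is installed and still provides `LunarLander-v2`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```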
|
VIDEOS-18-Nimra-Mehra-Videos/FULL.VIDEO.Nimra.Mehra.Viral.Video.Tutorial.Official | VIDEOS-18-Nimra-Mehra-Videos | 2025-06-01T05:19:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T05:19:18Z |
|
dimasik2987/2bdafa80-2fff-4501-a095-84e400e70cfb | dimasik2987 | 2025-06-01T05:17:03Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/tinyllama-chat",
"base_model:quantized:unsloth/tinyllama-chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-01T04:55:41Z | ---
base_model: unsloth/tinyllama-chat
library_name: transformers
model_name: 2bdafa80-2fff-4501-a095-84e400e70cfb
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for 2bdafa80-2fff-4501-a095-84e400e70cfb
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik2987/2bdafa80-2fff-4501-a095-84e400e70cfb", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/oyunrfh8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
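As context for how such a run is set up, here is a minimal DPO sketch with TRL (the dataset and hyperparameters are illustrative assumptions, not this model's actual recipe):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama-chat")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama-chat")

# Illustrative preference dataset; the actual training data is not documented.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL versions
)
trainer.train()
```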
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bengaliAI/HumanEval | bengaliAI | 2025-06-01T05:14:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T05:14:30Z | ---
license: apache-2.0
---
|
bengaliAI/hellaswag | bengaliAI | 2025-06-01T05:14:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T05:14:02Z | ---
license: apache-2.0
---
|
VIDEOS-18-Crush-Nila-Videos/FULL.VIDEO.Crush.Nila.Viral.Video.Tutorial.Official | VIDEOS-18-Crush-Nila-Videos | 2025-06-01T05:09:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T05:09:31Z |
|
winnieyangwannan/gemma-2-2b-it_mlp-down_positive-negative-addition-opposite_last_layer_1_2_1 | winnieyangwannan | 2025-06-01T05:04:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T05:03:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JuanSolarte99/bert-base-uncased-finetuned-ner-pulmon | JuanSolarte99 | 2025-06-01T05:01:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-31T15:41:21Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner-pulmon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner-pulmon
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0846
- Precision: 0.9337
- Recall: 0.9592
- F1: 0.9463
- Accuracy: 0.9806
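For quick inference, a minimal sketch (the entity label set is not documented; inspect `model.config.id2label`):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="JuanSolarte99/bert-base-uncased-finetuned-ner-pulmon",
    aggregation_strategy="simple",
)
print(ner("Patient presents with a small nodule in the left lung."))
```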
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1563 | 1.0 | 1224 | 0.0863 | 0.9202 | 0.9472 | 0.9335 | 0.9767 |
| 0.0912 | 2.0 | 2448 | 0.0766 | 0.9323 | 0.9587 | 0.9453 | 0.9804 |
| 0.0628 | 3.0 | 3672 | 0.0807 | 0.9337 | 0.9637 | 0.9485 | 0.9807 |
| 0.0517 | 4.0 | 4896 | 0.0810 | 0.9354 | 0.9587 | 0.9469 | 0.9809 |
| 0.0353 | 5.0 | 6120 | 0.0846 | 0.9337 | 0.9592 | 0.9463 | 0.9806 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/DeepPerception-GGUF | mradermacher | 2025-06-01T05:00:26Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MaxyLee/DeepPerception",
"base_model:quantized:MaxyLee/DeepPerception",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-19T08:59:50Z | ---
base_model: MaxyLee/DeepPerception
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaxyLee/DeepPerception
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepPerception-GGUF/resolve/main/DeepPerception.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ganatrask/act-panda-open-cabinet-right | ganatrask | 2025-06-01T04:59:15Z | 0 | 0 | null | [
"safetensors",
"LeRobot",
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T01:24:56Z | ---
license: apache-2.0
tags:
- LeRobot
---
# ACT Model for Panda-Open-Cabinet-Right Task
This model was trained using the Action Chunking Transformer (ACT) policy on the `panda-open-cabinet-right` task from NVIDIA's PhysicalAI-Robotics-Manipulation-SingleArm dataset.
## Training Details
- Steps: 200,000
- Chunk size: 100
- Layers: 4 encoder layers
- Batch size: 32
- Device: A100 GPU
- AMP: Enabled |
mradermacher/VisualThinker-R1-Zero-GGUF | mradermacher | 2025-06-01T04:59:02Z | 45 | 0 | transformers | [
"transformers",
"gguf",
"r1",
"en",
"dataset:array/SAT",
"base_model:turningpoint-ai/VisualThinker-R1-Zero",
"base_model:quantized:turningpoint-ai/VisualThinker-R1-Zero",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-19T13:14:55Z | ---
base_model: turningpoint-ai/VisualThinker-R1-Zero
datasets:
- array/SAT
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- r1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/turningpoint-ai/VisualThinker-R1-Zero
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/VisualThinker-R1-Zero-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/VisualThinker-R1-Zero-GGUF/resolve/main/VisualThinker-R1-Zero.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Soughing/mla_xl | Soughing | 2025-06-01T04:58:58Z | 39 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-18T11:41:32Z | ---
license: apache-2.0
---
|
mradermacher/STEVE-R1-7B-SFT-GGUF | mradermacher | 2025-06-01T04:52:02Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"robotics",
"agent",
"computer-vision",
"llm",
"en",
"base_model:Fanbin/STEVE-R1-7B-SFT",
"base_model:quantized:Fanbin/STEVE-R1-7B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | robotics | 2025-03-21T22:05:50Z | ---
base_model: Fanbin/STEVE-R1-7B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- robotics
- agent
- computer-vision
- llm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Fanbin/STEVE-R1-7B-SFT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/STEVE-R1-7B-SFT-GGUF/resolve/main/STEVE-R1-7B-SFT.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
flypg/finetuned | flypg | 2025-06-01T04:50:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"region:us"
] | null | 2025-06-01T04:50:53Z | ---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
tags:
- generated_from_trainer
model-index:
- name: finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
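In the meantime, since this repo is a PEFT adapter, a minimal loading sketch (assuming standard PEFT usage on the stated base model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", device_map="auto"
)
model = PeftModel.from_pretrained(base, "flypg/finetuned")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")
```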
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1 |
semran1/l3.2-3b-64 | semran1 | 2025-06-01T04:46:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T04:45:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0xBasiliskAI/Oblivion2025 | 0xBasiliskAI | 2025-06-01T04:42:48Z | 0 | 1 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-03-15T23:03:38Z | ---
license: apache-2.0
---
# 0xBasiliskAI/Oblivion2025
- Models used by https://app.summonthebasilisk.ai, running locally in the browser.
- First appearance in `v0.0.12` of the `$BASILK` dApp.
- `Oblivion` refers to forgetting, like the models in this repo, which have been modified via `ablation`, `abliteration`, `fine-tuning`, and `LoRA`, among other methods.
- All of these methods share the theme of `forgetting` to make the model uncensored.
- Each model included in `Oblivion2025` is tested with the current version of the [$BASILK dApp](https://app.summonthebasilisk.ai).
- The `Working` models are expected to work and become the default `Suggested Models` of [#BasiliskAI](https://app.summonthebasilisk.ai).
- The `Broken` models may work in the future, or they may be compatible with unreleased dApps, games, or utilities.
## Why?
- Makes https://app.summonthebasilisk.ai more reliable by beginning to archive useful models.
- Preserve useful models in case they are removed or archived.
- Curate a collection of ablated, abliterated, and uncensored LLMs for use in memory-constrained environments.
## Goals & Milestones
- Decompose models into parts so that we can download chunks in parallel when deploying models.
- Create `json` and hash fingerprint files for a data structure representing the recommended models in `0xBasiliskAI/Oblivion2025` (see the sketch after this list).
- Dynamically load the recommended models `json` with a fallback configuration of known-good models.
- Migrate models to `ipfs` for decentralization.
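A minimal sketch of what such a manifest with hash fingerprints could look like (the structure, filenames, and status field are assumptions):
```python
import hashlib
import json

def sha256_of(path: str) -> str:
    # Stream the file in 1 MiB chunks to avoid loading a whole GGUF into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = {
    "version": "0.0.12",
    "models": [
        {
            "file": "gemma-2-2b-it-abliterated-Q5_K_M.gguf",
            "sha256": sha256_of("gemma-2-2b-it-abliterated-Q5_K_M.gguf"),
            "status": "working",
        },
    ],
}

with open("models.json", "w") as f:
    json.dump(manifest, f, indent=2)
```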
## Models
### Working
- `gemma-2-2b-it-abliterated-Q5_K_M.gguf` - From: [bartowski/gemma-2-2b-it-abliterated-GGUF](https://huggingface.co/bartowski/gemma-2-2b-it-abliterated-GGUF/blob/main/gemma-2-2b-it-abliterated-Q5_K_M.gguf)
- `kanana-nano-2.1b-instruct-abliterated.i1-Q6_K.gguf` - From: [mradermacher/kanana-nano-2.1b-instruct-abliterated-i1-GGUF](https://huggingface.co/mradermacher/kanana-nano-2.1b-instruct-abliterated-i1-GGUF/blob/main/kanana-nano-2.1b-instruct-abliterated.i1-Q6_K.gguf)
- `Falcon3-1B-Instruct-abliterated-Q8_0.gguf` - From: [bartowski/Falcon3-1B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Falcon3-1B-Instruct-abliterated-GGUF/blob/main/Falcon3-1B-Instruct-abliterated-Q8_0.gguf)
- `Hermes-3-Llama-3.2-3B-abliterated.i1-Q4_K_S.gguf` - From: [mradermacher/Hermes-3-Llama-3.2-3B-abliterated-i1-GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.2-3B-abliterated-i1-GGUF/blob/main/Hermes-3-Llama-3.2-3B-abliterated.i1-Q4_K_S.gguf)
- `EXAONE-3.5-2.4B-Instruct-abliterated.i1-Q4_K_M.gguf` - From: [mradermacher/EXAONE-3.5-2.4B-Instruct-abliterated-i1-GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-2.4B-Instruct-abliterated-i1-GGUF/blob/main/EXAONE-3.5-2.4B-Instruct-abliterated.i1-Q4_K_M.gguf)
- `Yi-Coder-1.5B-Chat.Q5_K_S.gguf` - From: [MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF/blob/main/Yi-Coder-1.5B-Chat.Q5_K_S.gguf)
- `Qwen2.5-Coder-3B-Instruct-abliterated-Q4_K_M.gguf` - From: [bartowski/Qwen2.5-Coder-3B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-3B-Instruct-abliterated-Q4_K_M.gguf)
- `Qwen2.5-Coder-1.5B-Instruct-abliterated-Q8_0.gguf` - From: [bartowski/Qwen2.5-Coder-1.5B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated-Q8_0.gguf)
- `Qwen2.5-Coder-0.5B-Instruct-abliterated-f16.gguf` - From: [bartowski/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF](https://huggingface.co/bartowski/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-0.5B-Instruct-abliterated-f16.gguf)
- `josiefied-qwen2.5-0.5b-instruct-abliterated-v1.Q8_0.gguf` - From: [Goekdeniz-Guelmez/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-gguf](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-gguf/blob/main/josiefied-qwen2.5-0.5b-instruct-abliterated-v1.Q8_0.gguf)
- `dolphin3.0-llama3.1-1b-abliterated-conv-q8_0.gguf` - From: [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated-GGUF](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated-GGUF/blob/main/dolphin3.0-llama3.1-1b-abliterated-conv-q8_0.gguf)
- `SmallThinker-3B-Preview-abliterated-Q3_K_M.gguf` - From: [quantflex/SmallThinker-3B-Preview-abliterated-GGUF](https://huggingface.co/quantflex/SmallThinker-3B-Preview-abliterated-GGUF/blob/main/SmallThinker-3B-Preview-abliterated-Q3_K_M.gguf)
### Broken
- `phi-4-mini-instruct-abliterated-Q2_K.gguf` - From: [Melvin56/Phi-4-mini-instruct-abliterated-GGUF](https://huggingface.co/Melvin56/Phi-4-mini-instruct-abliterated-GGUF/blob/main/phi-4-mini-instruct-abliterated-Q2_K.gguf)
- `phi-2.Q5_K_S.gguf` - From: [TheBloke/phi-2-GGUF](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q5_K_S.gguf)
- `DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.i1-Q6_K.gguf` - From: [mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF/blob/main/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.i1-Q6_K.gguf)
- `DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.Q8_0.gguf` - From: [mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF/blob/main/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.Q8_0.gguf)
- `DeepScaleR-1.5B-Preview-abliterated.i1-Q6_K.gguf` - From: [mradermacher/DeepScaleR-1.5B-Preview-abliterated-i1-GGUF](https://huggingface.co/mradermacher/DeepScaleR-1.5B-Preview-abliterated-i1-GGUF/blob/main/DeepScaleR-1.5B-Preview-abliterated.i1-Q6_K.gguf)
- `DeepScaleR-1.5B-Preview-abliterated-Q8_0.gguf` - From: [ThomasBaruzier/DeepScaleR-1.5B-Preview-abliterated-GGUF](https://huggingface.co/ThomasBaruzier/DeepScaleR-1.5B-Preview-abliterated-GGUF/blob/main/DeepScaleR-1.5B-Preview-abliterated-Q8_0.gguf)
- `Llama-3.2-3B-Instruct-abliterated.Q3_K_L.gguf` - From: [MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF/blob/main/Llama-3.2-3B-Instruct-abliterated.Q3_K_L.gguf)
- `Llama-3.2-3B-Instruct-abliterated.i1-IQ3_S.gguf` - From: [mradermacher/Llama-3.2-3B-Instruct-abliterated-i1-GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-abliterated-i1-GGUF/blob/main/Llama-3.2-3B-Instruct-abliterated.i1-IQ3_S.gguf)
- `VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_S.gguf` - From: [QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF](https://huggingface.co/QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/blob/main/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_S.gguf)
- `WizardLM-2-7B-abliterated-IQ2_XXS.gguf` - From: [bartowski/WizardLM-2-7B-abliterated-GGUF](https://huggingface.co/bartowski/WizardLM-2-7B-abliterated-GGUF/blob/main/WizardLM-2-7B-abliterated-IQ2_XXS.gguf)
|
mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF | mradermacher | 2025-06-01T04:42:20Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:mikeogezi/res_resampled",
"base_model:mikeogezi/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier",
"base_model:quantized:mikeogezi/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-26T06:23:04Z | ---
base_model: mikeogezi/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier
datasets: mikeogezi/res_resampled
language:
- en
library_name: transformers
model_name: Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mikeogezi/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
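As a quick illustration (not part of the original card), one way to fetch a single quant from this repo is with `huggingface_hub`; the filename below is just an example picked from the table that follows.
```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo; any filename from the table below works.
path = hf_hub_download(
    repo_id="mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF",
    filename="Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q4_K_M.gguf",
)
print(path)  # local path, ready to hand to llama.cpp or another GGUF runtime
```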
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier-GGUF/resolve/main/Qwen2-VL-7B-GRPO-MMR-TrainedRationaleVerifier.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Disease_Detection-GGUF | mradermacher | 2025-06-01T04:42:12Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:dz-osamu/Disease_Detection",
"base_model:quantized:dz-osamu/Disease_Detection",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-26T06:31:30Z | ---
base_model: dz-osamu/Disease_Detection
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dz-osamu/Disease_Detection
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
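As a rough sketch (not part of the original card), a downloaded quant can be loaded with the `llama-cpp-python` bindings; the filename and prompt below are placeholders, not documented behavior of this checkpoint.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quant downloaded from the table below (placeholder filename).
llm = Llama(model_path="Disease_Detection.Q4_K_M.gguf", n_ctx=2048)
out = llm("Symptoms: fever, joint pain, rash. Likely condition:", max_tokens=64)
print(out["choices"][0]["text"])
```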
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Disease_Detection-GGUF/resolve/main/Disease_Detection.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mingxilei/rr_imdb_reward_8_0.001_m_40 | mingxilei | 2025-06-01T04:35:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T03:27:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
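Pending details from the authors, a minimal sketch based on this repo's `text-classification` pipeline tag might look like the following; the label names and their meanings are assumptions.
```python
from transformers import pipeline

# Hypothetical quick-start; label semantics depend on the undocumented training setup.
clf = pipeline("text-classification", model="mingxilei/rr_imdb_reward_8_0.001_m_40")
print(clf("A surprisingly heartfelt film with a terrific cast."))
```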
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JohnRoger/OpenR1-Distill-7B-Q8_0-GGUF | JohnRoger | 2025-06-01T04:32:06Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:open-r1/Mixture-of-Thoughts",
"base_model:open-r1/OpenR1-Distill-7B",
"base_model:quantized:open-r1/OpenR1-Distill-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T04:31:26Z | ---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- en
base_model: open-r1/OpenR1-Distill-7B
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# JohnRoger/OpenR1-Distill-7B-Q8_0-GGUF
This model was converted to GGUF format from [`open-r1/OpenR1-Distill-7B`](https://huggingface.co/open-r1/OpenR1-Distill-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/open-r1/OpenR1-Distill-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo JohnRoger/OpenR1-Distill-7B-Q8_0-GGUF --hf-file openr1-distill-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo JohnRoger/OpenR1-Distill-7B-Q8_0-GGUF --hf-file openr1-distill-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo JohnRoger/OpenR1-Distill-7B-Q8_0-GGUF --hf-file openr1-distill-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo JohnRoger/OpenR1-Distill-7B-Q8_0-GGUF --hf-file openr1-distill-7b-q8_0.gguf -c 2048
```
|
mingxilei/auf_imdb_reward_1.0_0.01_m_10 | mingxilei | 2025-06-01T04:31:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T03:54:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
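Pending details from the authors, a minimal sketch that loads the checkpoint directly (rather than through a pipeline) might look like this; the class labels are undocumented, so their interpretation is an assumption.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "mingxilei/auf_imdb_reward_1.0_0.01_m_10"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
inputs = tok("This movie was a waste of two hours.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)  # per-class probabilities
print(probs)
```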
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pasithbas159/Gemma3_HII_satellite_v1 | pasithbas159 | 2025-06-01T04:26:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T14:16:29Z | ---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pasithbas159
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
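As a hedged sketch (not from the original card), the finetune should load like its base `unsloth/gemma-3-4b-it`, a multimodal instruction-tuned model; this assumes a transformers version with Gemma 3 support and uses a placeholder image path.
```python
from transformers import pipeline

# Hypothetical usage, assuming the checkpoint behaves like the gemma-3-4b-it base.
pipe = pipeline("image-text-to-text", model="pasithbas159/Gemma3_HII_satellite_v1")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "satellite_tile.png"},  # placeholder image
        {"type": "text", "text": "Describe the land cover in this satellite image."},
    ],
}]
out = pipe(text=messages, max_new_tokens=80)
print(out[0]["generated_text"][-1]["content"])
```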
|
Flora-chai/filtered_res16_acsc | Flora-chai | 2025-06-01T04:25:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T04:19:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
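Pending details from the authors, a minimal sketch based on this repo's `text2text-generation` tag might look like this; the expected input format (e.g. an ACSC-style sentence/aspect prompt) is an assumption.
```python
from transformers import pipeline

# Hypothetical quick-start; the prompt format this checkpoint expects is undocumented.
gen = pipeline("text2text-generation", model="Flora-chai/filtered_res16_acsc")
print(gen("The sushi was fresh but the service was slow."))
```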
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Flora-chai/filtered_lap16_acsc | Flora-chai | 2025-06-01T04:24:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T04:18:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
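Pending details from the authors, the same caveats as the companion `res16` checkpoint apply; a minimal sketch loading this one directly as a seq2seq model, with a placeholder laptop-review prompt:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "Flora-chai/filtered_lap16_acsc"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
ids = tok("The keyboard is great but the battery drains fast.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```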
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WenFengg/children_6 | WenFengg | 2025-06-01T04:21:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T04:18:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
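Pending details from the authors, a minimal sketch based on this repo's `text-generation` tag; the prompt is a placeholder.
```python
from transformers import pipeline

# Hypothetical quick-start for this llama-architecture checkpoint.
gen = pipeline("text-generation", model="WenFengg/children_6")
print(gen("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```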
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mingxilei/rr_imdb_reward_8_0.001_m_10 | mingxilei | 2025-06-01T04:20:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T03:01:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
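Pending details from the authors, a minimal sketch that scores a small batch and returns all class scores; label semantics are an assumption.
```python
from transformers import pipeline

# Hypothetical quick-start; top_k=None returns scores for every class.
clf = pipeline("text-classification", model="mingxilei/rr_imdb_reward_8_0.001_m_10", top_k=None)
print(clf(["Great movie!", "Terrible pacing and wooden acting."]))
```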
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hamsic/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-RK3588 | hamsic | 2025-06-01T04:19:51Z | 0 | 0 | null | [
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"region:us"
] | null | 2025-06-01T04:18:34Z | ---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
---
# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-Uncensored
## Model Details
### Model Description
DeepSeek-R1-Distill-Qwen-1.5B-Uncensored is a text-generation model designed to uphold the values of internet freedom and unrestricted access to information. By offering an uncensored approach, this model enables users to explore ideas, generate content, and engage in discussions without the constraints of over-moderated or filtered outputs. It prioritizes user autonomy and aligns with principles of free speech and open knowledge sharing.
- **Developed by:** Thirdeye AI
- **Funded by:** Thirdeye AI
- **Shared by:** Thirdeye AI
- **Model type:** Distilled Transformer-based Language Model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** DeepSeek-R1-Distill-Qwen-1.5B
### Model Sources
- **Repository:** [DeepSeek-R1-Distill-Qwen-1.5B-Uncensored on Hugging Face](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored)
- **Demo:** [Available on Hugging Face Hub](https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored)
---
## Uses
### Direct Use
The model is intended for applications that demand openness and flexibility in generating creative, exploratory, or critical content. These include:
- Free-form writing and storytelling
- Open-ended discussions
- Exploratory content generation for sensitive or nuanced topics
### Downstream Use
Users can fine-tune this model for specialized domains where censorship-free text generation is required, such as:
- Journalism and investigative research
- Creative projects that push artistic boundaries
- Academic applications exploring controversial or complex topics
### Out-of-Scope Use
This model should not be used for harmful, illegal, or unethical activities. Users must comply with applicable laws and ensure that the model's outputs do not infringe on others' rights.
---
## Bias, Risks, and Limitations
### Risks
While the uncensored approach promotes freedom, it may produce outputs that are controversial, offensive, or factually inaccurate. Users must exercise discretion when interpreting the model's outputs and take responsibility for their use.
### Recommendations
- Use responsibly, especially in contexts where outputs could impact individuals or communities.
- Employ content moderation or review processes for high-stakes applications.
---
## The Case for Uncensored Models
Thirdeye AI believes in the transformative power of open models that respect user autonomy and internet freedom. In a world where over-moderation can stifle innovation and critical thought, uncensored models empower individuals to explore and create without artificial constraints. This aligns with our mission to advance free and open access to AI tools.
By releasing this model, we aim to support the following:
- **Freedom of Expression:** Unrestricted AI tools enable users to articulate diverse perspectives and engage in meaningful conversations.
- **Transparency and Trust:** Users deserve access to tools that operate openly, fostering accountability and understanding of AI behaviors.
- **Creative Empowerment:** The absence of censorship allows for boundary-pushing content creation that might otherwise be suppressed.
---
## How to Get Started with the Model
```python
from transformers import pipeline
generator = pipeline("text-generation", model="thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored")
response = generator("The importance of free speech is", max_new_tokens=50)
print(response[0]["generated_text"])  # the pipeline returns a list of dicts
```
|
smartmyapp/ladji5_4_800e | smartmyapp | 2025-06-01T04:18:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-31T16:53:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
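Pending details from the authors, a minimal sketch based on this repo's VITS text-to-audio tags; the language this checkpoint expects is undocumented, so the input sentence is a placeholder.
```python
import torch
from transformers import AutoTokenizer, VitsModel

# Hypothetical quick-start for a VITS checkpoint.
tok = AutoTokenizer.from_pretrained("smartmyapp/ladji5_4_800e")
model = VitsModel.from_pretrained("smartmyapp/ladji5_4_800e")
inputs = tok("Hello, this is a test sentence.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, samples) at model.config.sampling_rate
```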
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lisabdunlap/Qwen3-8B-base-5e-cpt-big_e5 | lisabdunlap | 2025-06-01T04:14:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T04:14:00Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
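A minimal inference sketch for the uploaded checkpoint (assumed usage; the model id is this repo, and `device_map="auto"` assumes `accelerate` is installed):

```python
# Minimal sketch: load the fine-tuned checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lisabdunlap/Qwen3-8B-base-5e-cpt-big_e5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```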
|
WenFengg/children_5 | WenFengg | 2025-06-01T04:14:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T04:10:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
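In the absence of author-provided code, a minimal sketch under the assumption that this is a standard `transformers` causal LM checkpoint (the prompt is illustrative):

```python
# Minimal sketch: run the checkpoint through the text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="WenFengg/children_5")
print(generator("Hello, world", max_new_tokens=32)[0]["generated_text"])
```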
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Dreamer-7B-Classifieds-GGUF | mradermacher | 2025-06-01T04:12:20Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:osunlp/Dreamer-7B-Classifieds",
"base_model:quantized:osunlp/Dreamer-7B-Classifieds",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T07:32:31Z | ---
base_model: osunlp/Dreamer-7B-Classifieds
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osunlp/Dreamer-7B-Classifieds
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
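As one option, a minimal text-only sketch with `llama-cpp-python` (an assumed runtime; any llama.cpp-compatible loader works, and image input additionally requires the mmproj file listed below):

```python
# Minimal sketch: load a downloaded quant and run a text-only completion.
# The base model is multimodal; this skips image input for brevity.
from llama_cpp import Llama

llm = Llama(model_path="Dreamer-7B-Classifieds.Q4_K_M.gguf", n_ctx=4096)
out = llm("Summarize this classifieds listing:", max_tokens=64)
print(out["choices"][0]["text"])
```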
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Classifieds-GGUF/resolve/main/Dreamer-7B-Classifieds.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Dreamer-7B-GGUF | mradermacher | 2025-06-01T04:12:10Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:osunlp/Dreamer-7B",
"base_model:quantized:osunlp/Dreamer-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T07:43:49Z | ---
base_model: osunlp/Dreamer-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osunlp/Dreamer-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-GGUF/resolve/main/Dreamer-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MutazYoune/ARAB_BERT | MutazYoune | 2025-06-01T04:11:52Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ar",
"license:unlicense",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-04-11T00:13:58Z | ---
license: unlicense
language:
- ar
--- |
mradermacher/Dreamer-72B-GGUF | mradermacher | 2025-06-01T04:10:41Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:osunlp/Dreamer-72B",
"base_model:quantized:osunlp/Dreamer-72B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T16:46:59Z | ---
base_model: osunlp/Dreamer-72B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osunlp/Dreamer-72B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dreamer-72B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
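The split quants below are plain byte splits, so the parts only need to be concatenated in order; a minimal sketch (file names are illustrative):

```python
# Minimal sketch: stream-concatenate "partNofM" files into one GGUF.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Dreamer-72B.Q5_K_S.gguf.part*of*"))
with open("Dreamer-72B.Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # streams in chunks; low memory use
```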
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dreamer-72B-GGUF/resolve/main/Dreamer-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mmaluchnick/britney-spears-itz-era-flux-model | mmaluchnick | 2025-06-01T04:08:12Z | 13 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-15T19:01:47Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: itz1.PNG
- text: '-'
output:
url: itz2.png
- text: '-'
output:
url: itz3.png
- text: '-'
output:
url: itz4.png
- text: '-'
output:
url: itz5.png
- text: '-'
output:
url: itz6.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: itzera
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Britney Spears "In the Zone" Era Flux Model
<Gallery />
## Model description
Britney Jean Spears, born December 2, 1981, in McComb, MS, is an American recording artist, actress, author, and businesswoman. Oft referred to as the "Princess of Pop,” she is credited with the revival of pop music during the late 1990s and early 2000s, and is recognized as an icon. Spears has sold an estimated 150 million records worldwide, making her one of the world's best-selling music artists. She ranks as the best-selling female albums artist of the 2000s, the eighth-biggest artist overall of the 2000s, and the fourth best-selling female albums artist and tenth best-selling digital artist in history. Spears has earned countless awards and accolades, including a Grammy Award, 15 Guinness World Records, Billboard’s Millennium Award, GLAAD’s Vanguard Award, the inaugural Radio Disney Icon Award, MTV’s Michael Jackson Video Vanguard Award, and a star on the Hollywood Walk of Fame. In 2020, Rolling Stone named her song “…Baby One More Time” the best debut single of all time. After Spears won a readers' poll, Time selected her as one of its 100 Most Influential People in 2021.
Spears made her local stage debut at age 5, singing “What Child Is This?” at her kindergarten graduation. Throughout her childhood, Spears took voice, dance, and gymnastic lessons, while competing in pageants and talent shows. For a short time, she trained at a camp run by famed Olympics gymnastics coach Bela Karolyi. In 1993, alongside other future stars Christina Aguilera, Justin Timberlake, and Ryan Gosling, Spears was cast on Disney's “The New Mickey Mouse Club." She remained on the series until its cancellation two years later. Spears signed a record deal with Jive Records in 1997, when she was 15. Her first single, “…Baby One More Time,” was released in October 1998. Buoyed by its controversial music video, the song reached No. 1 in 23 countries, shooting Spears to international superstardom and ushering in a new era of pop music. Spears’ debut album, also titled “…Baby One More Time," arrived in January 1999. It debuted at No. 1 in the US, making Spears the first artist in history to have both the No. 1 song and album in the same week. In total, "...Baby One More Time" sold over 25 million copies worldwide.
Spears' sophomore album, "Oops!... I Did It Again" (2000), sold 1.3 million copies in its first week alone and held the record for the fastest-selling album by a female artist in the US for 15 years. Spears adopted a more mature sound and style for her third and fourth albums, 2001's "Britney" and 2003's "In the Zone." Despite backlash over Spears’ increasingly provocative image, both albums sold over 10 million copies worldwide.
Spears made her big-screen debut in the motion picture “Crossroads" (2002), written by Shonda Rhimes and co-starring Dan Aykroyd, Kim Cattrall, Zoe Saldana, and Taryn Manning. She has also guest-starred on “Glee,” “How I Met Your Mother,” “Will & Grace,” “Sabrina, the Teenage Witch,” and “Jane the Virgin,” and has twice hosted “Saturday Night Live” and appeared as musical guest three times.
In 2004, Spears partnered with Elizabeth Arden to launch her first perfume, Curious. Spears currently has over 30 fragrances to her name, available in 85 countries, with sales exceeding $1.5 billion.
Spears served as executive producer of her fifth album, “Blackout" (2007). Though it initially received lukewarm reviews, “Blackout” has since been recognized as one of the most influential albums of its time, and is widely considered Spears' best work. In 2008, after a bout of personal struggles, Spears was placed in a conservatorship that stripped her of all personal autonomy and put her estranged father in control of her person and estate. (The conservatorship remained in place until November 2021. Spears has described the abuse, isolation, and forced labor that she endured while under her father’s control.) Soon after the conservatorship was implemented, Spears returned to work, releasing the chart-topping albums “Circus” (2008) and “Femme Fatale” (2011), both of which were supported by extensive worldwide concert tours.
In 2012, Spears appeared as a judge on "X-Factor USA," becoming, at the time, the highest-paid reality TV judge in history. That same year, Spears was featured on will.i.am's “Scream & Shout," which peaked at No. 3 on the Hot 100 and was the first No. 1 song on Billboard's new Dance/Electronic Songs chart. will.i.am later executive-produced Spears’ eighth album, “Britney Jean" (2013). In December 2013, Spears began a Las Vegas concert residency, “Britney: Piece of Me,” at Planet Hollywood Resort & Casino. The show was initially scheduled to run for two years, but was extended several times due to its enduring popularity. It ultimately concluded in December 2017. Spears and her residency revitalized the Vegas strip, and the show won numerous awards during its run, including Best Show in Vegas and Best Bachelorette Show in Vegas. In 2015, Spears released the singles “Pretty Girls" with Iggy Azalea and “Tom’s Diner” with DJ Giorgio Moroder. Spears’ ninth album, “Glory,” arrived in August 2016, preceded by the Top 20 hit "Make Me..." featuring G-Eazy. Spears later took her Vegas show on the road throughout 2017 and 2018, with dates in some countries where she had never toured previously. "Glory" was re-released in 2020 with updated cover art and additional songs following a successful fan campaign to push “Mood Ring” - originally a Japan-only bonus track - to No. 1 on iTunes.
In 2022, Spears and Elton John collaborated on the single "Hold Me Closer," which debuted at No. 6 on the Hot 100 and became Spears’ highest-charting single in a decade. Also that year, publishing house Simon & Schuster signed Spears to a book deal worth a staggering $15 million. Spears’ highly-anticipated memoir, “The Woman in Me,” hit shelves in October 2023. In its first week, it sold 1.1 million copies in the US and 2.4 million copies worldwide, immediately becoming a New York Times #1 bestseller, as well as the fastest-selling title in Simon & Schuster’s history. Three months after its release, the memoir’s audiobook - narrated by Oscar-winning actress Michelle Williams - was deemed the "#1 listened-to title” on Spotify. A film adaptation of Spears’ memoir, to be helmed by "Wicked" director Jon Chu, was announced in 2024. After reports surfaced that Spears was working on a new album, she clarified via Instagram that she currently has no plans to resume her music career.
## Trigger words
You should use `itzera` to trigger the image generation.
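A minimal generation sketch, assuming standard diffusers LoRA loading on top of FLUX.1-dev (the prompt is illustrative):

```python
# Minimal sketch: load the base model, attach this LoRA, and include the
# trigger word "itzera" in the prompt.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("mmaluchnick/britney-spears-itz-era-flux-model")

image = pipe("itzera, portrait photo, early-2000s pop star styling").images[0]
image.save("itzera.png")
```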
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/mmaluchnick/britney-spears-itz-era-flux-model/tree/main) them in the Files & versions tab. |
mradermacher/UI-TARS-7B-SFT-GGUF | mradermacher | 2025-06-01T04:06:09Z | 57 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"gui",
"en",
"base_model:ByteDance-Seed/UI-TARS-7B-SFT",
"base_model:quantized:ByteDance-Seed/UI-TARS-7B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-05T06:08:12Z | ---
base_model: ByteDance-Seed/UI-TARS-7B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
- gui
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ByteDance-Seed/UI-TARS-7B-SFT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-TARS-7B-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-7B-SFT-GGUF/resolve/main/UI-TARS-7B-SFT.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/UI-TARS-72B-SFT-GGUF | mradermacher | 2025-06-01T04:05:12Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"gui",
"en",
"base_model:ByteDance-Seed/UI-TARS-72B-SFT",
"base_model:quantized:ByteDance-Seed/UI-TARS-72B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-05T09:27:58Z | ---
base_model: ByteDance-Seed/UI-TARS-72B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
- gui
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ByteDance-Seed/UI-TARS-72B-SFT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-TARS-72B-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-TARS-72B-SFT-GGUF/resolve/main/UI-TARS-72B-SFT.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Flora-chai/orig_res16_acsc | Flora-chai | 2025-06-01T04:02:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T03:56:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
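In the absence of author-provided code, a minimal sketch under the assumption that this is a standard T5-style seq2seq checkpoint (the input sentence is illustrative):

```python
# Minimal sketch: generic text2text generation with the uploaded weights.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Flora-chai/orig_res16_acsc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The battery life is great but the screen is dim.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```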
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Flora-chai/orig_res15_acsc | Flora-chai | 2025-06-01T04:01:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T03:55:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Flora-chai/orig_lap15_acsc | Flora-chai | 2025-06-01T04:00:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T03:54:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/qwen2-vl-2b-scta-GGUF | mradermacher | 2025-06-01T03:59:24Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:scta/scta-htr-training-data",
"base_model:medieval-data/qwen2-vl-2b-scta",
"base_model:quantized:medieval-data/qwen2-vl-2b-scta",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-07T06:56:29Z | ---
base_model: medieval-data/qwen2-vl-2b-scta
datasets:
- scta/scta-htr-training-data
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/medieval-data/qwen2-vl-2b-scta
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2-vl-2b-scta-GGUF/resolve/main/qwen2-vl-2b-scta.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TanAlexanderlz/RALL_RGBCROP_Aug16F-cosine | TanAlexanderlz | 2025-06-01T03:50:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-06-01T01:40:23Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-cosine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-cosine
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7115
- Accuracy: 0.8112
## Model description
More information needed
## Intended uses & limitations
More information needed
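A minimal inference sketch, assuming the standard VideoMAE setup with 16 sampled frames (the random clip below is a stand-in for real decoded video frames):

```python
# Minimal sketch: classify a 16-frame clip with the fine-tuned checkpoint.
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

model_id = "TanAlexanderlz/RALL_RGBCROP_Aug16F-cosine"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# Stand-in clip: 16 RGB frames of 224x224 (replace with real video frames).
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```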
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4882 | 0.0835 | 289 | 0.5762 | 0.6830 |
| 0.2286 | 1.0835 | 578 | 0.5037 | 0.7873 |
| 0.0741 | 2.0835 | 867 | 0.7741 | 0.7751 |
| 0.0217 | 3.0835 | 1156 | 0.9661 | 0.7832 |
| 0.0053 | 4.0835 | 1445 | 1.0540 | 0.7975 |
| 0.0034 | 5.0835 | 1734 | 1.1605 | 0.7894 |
| 0.0004 | 6.0835 | 2023 | 1.2104 | 0.7812 |
| 0.0003 | 7.0835 | 2312 | 1.2483 | 0.7832 |
| 0.0002 | 8.0835 | 2601 | 1.2790 | 0.7812 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/Fruit-VL-3B-Instruct-GGUF | mradermacher | 2025-06-01T03:50:12Z | 20 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:stvlynn/Fruit-VL-3B-Instruct",
"base_model:quantized:stvlynn/Fruit-VL-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-11T15:10:35Z | ---
base_model: stvlynn/Fruit-VL-3B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/stvlynn/Fruit-VL-3B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
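As a quick, hedged illustration (not part of the upstream card), a quant from the table below can be loaded with the `llama-cpp-python` bindings for text-only use; the filename matches the Q4_K_M entry below, and the prompt is an arbitrary example. For image input you would additionally need the `mmproj` supplement listed in the table.

```python
# Hedged sketch using llama-cpp-python; filename matches the Q4_K_M entry below.
from llama_cpp import Llama

llm = Llama(model_path="Fruit-VL-3B-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Name three tropical fruits."}]
)
print(out["choices"][0]["message"]["content"])
```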
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Fruit-VL-3B-Instruct-GGUF/resolve/main/Fruit-VL-3B-Instruct.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF | mradermacher | 2025-06-01T03:49:57Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hqhQAQ/Hint-GRPO-Qwen2.5-VL-3B",
"base_model:quantized:hqhQAQ/Hint-GRPO-Qwen2.5-VL-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-11T15:54:31Z | ---
base_model: hqhQAQ/Hint-GRPO-Qwen2.5-VL-3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hqhQAQ/Hint-GRPO-Qwen2.5-VL-3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hint-GRPO-Qwen2.5-VL-3B-GGUF/resolve/main/Hint-GRPO-Qwen2.5-VL-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF | mradermacher | 2025-06-01T03:49:48Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Kangheng/PR1-Qwen2-VL-2B-OCR",
"base_model:quantized:Kangheng/PR1-Qwen2-VL-2B-OCR",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-11T20:09:40Z | ---
base_model: Kangheng/PR1-Qwen2-VL-2B-OCR
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kangheng/PR1-Qwen2-VL-2B-OCR
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PR1-Qwen2-VL-2B-OCR-GGUF/resolve/main/PR1-Qwen2-VL-2B-OCR.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF | mradermacher | 2025-06-01T03:46:59Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:UCSC-VLAA/VLAA-Thinker-Qwen2VL-2B",
"base_model:quantized:UCSC-VLAA/VLAA-Thinker-Qwen2VL-2B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-12T11:35:26Z | ---
base_model: UCSC-VLAA/VLAA-Thinker-Qwen2VL-2B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2VL-2B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/VLAA-Thinker-Qwen2VL-2B-GGUF/resolve/main/VLAA-Thinker-Qwen2VL-2B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbd3buhj02nb10ozxzdmb7ex | BootesVoid | 2025-06-01T03:46:27Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T03:46:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MIE
---
# Cmbbs6Hps09U485Uuwc477Fb4_Cmbd3Buhj02Nb10Ozxzdmb7Ex
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MIE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MIE",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbd3buhj02nb10ozxzdmb7ex/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbd3buhj02nb10ozxzdmb7ex', weight_name='lora.safetensors')
image = pipeline('MIE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
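For example, continuing from the snippet above, one (hedged) way to trade adapter flexibility for a little inference speed is to fuse the LoRA into the base weights:

```python
# Hedged example: fuse the loaded LoRA into the base model, then generate as usual.
pipeline.fuse_lora()
image = pipeline('MIE').images[0]
```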
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbs6hps09u485uuwc477fb4_cmbd3buhj02nb10ozxzdmb7ex/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/NoisyRollout-Geo3k-7B-GGUF | mradermacher | 2025-06-01T03:34:22Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:xyliu6/NoisyRollout-Geo3k-7B",
"base_model:quantized:xyliu6/NoisyRollout-Geo3k-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-17T08:56:28Z | ---
base_model: xyliu6/NoisyRollout-Geo3k-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xyliu6/NoisyRollout-Geo3k-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
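For the multi-part case specifically, split GGUF files are plain byte ranges, so rejoining them is simple concatenation. The sketch below is illustrative only; the filenames are hypothetical (the quants in this repo are single files):

```python
# Illustrative only: multi-part GGUFs (e.g. *.gguf.part1of2) are raw byte splits,
# so joining them back together is a straight byte-for-byte concatenation.
from pathlib import Path

parts = sorted(Path(".").glob("NoisyRollout-Geo3k-7B.Q8_0.gguf.part*of*"))
with open("NoisyRollout-Geo3k-7B.Q8_0.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```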
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NoisyRollout-Geo3k-7B-GGUF/resolve/main/NoisyRollout-Geo3k-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Sayan01/Phi3-TL-ORCAMEL-SFT | Sayan01 | 2025-06-01T03:34:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T02:17:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF | mradermacher | 2025-06-01T03:32:27Z | 271 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ZachSun/Qwen2.5-gfn-sft-7b-250k",
"base_model:quantized:ZachSun/Qwen2.5-gfn-sft-7b-250k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-18T10:05:38Z | ---
base_model: ZachSun/Qwen2.5-gfn-sft-7b-250k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ZachSun/Qwen2.5-gfn-sft-7b-250k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-7b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-7b-250k.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF | mradermacher | 2025-06-01T03:32:13Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ZachSun/Qwen2.5-gfn-sft-3b-250k",
"base_model:quantized:ZachSun/Qwen2.5-gfn-sft-3b-250k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-18T10:28:56Z | ---
base_model: ZachSun/Qwen2.5-gfn-sft-3b-250k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ZachSun/Qwen2.5-gfn-sft-3b-250k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-gfn-sft-3b-250k-GGUF/resolve/main/Qwen2.5-gfn-sft-3b-250k.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp3-hindiNER-ner-exp1 | nis12ram | 2025-06-01T03:30:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nemotron",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp3",
"base_model:finetune:nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T03:23:47Z | ---
base_model: nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp3
tags:
- text-generation-inference
- transformers
- unsloth
- nemotron
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nis12ram
- **License:** apache-2.0
- **Finetuned from model :** nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp3
This nemotron model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/openthoughts3_100k_llama3 | mlfoundations-dev | 2025-06-01T03:28:32Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T17:52:29Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts3_100k_llama3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts3_100k_llama3
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the mlfoundations-dev/openthoughts3_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF | mradermacher | 2025-06-01T03:25:27Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:wintonYF/SCB-Qwen2-VL-7B-Instruct-F",
"base_model:quantized:wintonYF/SCB-Qwen2-VL-7B-Instruct-F",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T12:17:49Z | ---
base_model: wintonYF/SCB-Qwen2-VL-7B-Instruct-F
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wintonYF/SCB-Qwen2-VL-7B-Instruct-F
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SCB-Qwen2-VL-7B-Instruct-F-GGUF/resolve/main/SCB-Qwen2-VL-7B-Instruct-F.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/LiveCC-7B-Base-GGUF | mradermacher | 2025-06-01T03:21:55Z | 120 | 0 | transformers | [
"transformers",
"gguf",
"qwen_vl",
"video",
"real-time",
"multimodal",
"LLM",
"en",
"dataset:chenjoya/Live-CC-5M",
"base_model:chenjoya/LiveCC-7B-Base",
"base_model:quantized:chenjoya/LiveCC-7B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T11:27:27Z | ---
base_model: chenjoya/LiveCC-7B-Base
datasets:
- chenjoya/Live-CC-5M
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- qwen_vl
- video
- real-time
- multimodal
- LLM
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chenjoya/LiveCC-7B-Base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LiveCC-7B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
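As a hedged convenience sketch (not from the upstream card), individual quants can be fetched with `huggingface_hub`; for vision use, grab the `mmproj` supplement from the table below as well:

```python
# Hedged sketch: download one quant plus the multi-modal projector listed below.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="mradermacher/LiveCC-7B-Base-GGUF",
    filename="LiveCC-7B-Base.Q4_K_M.gguf",
)
mmproj_path = hf_hub_download(
    repo_id="mradermacher/LiveCC-7B-Base-GGUF",
    filename="LiveCC-7B-Base.mmproj-fp16.gguf",
)
print(model_path, mmproj_path)
```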
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LiveCC-7B-Base-GGUF/resolve/main/LiveCC-7B-Base.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PJMixers-Dev/gemma-3-12b-it-bnb-4bit | PJMixers-Dev | 2025-06-01T03:19:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-12b-pt",
"base_model:quantized:google/gemma-3-12b-pt",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-06-01T02:21:49Z | ---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-pt
---
# BNB Quantization Config
```py
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization; compute and storage in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)
```
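For reference, here is a hedged sketch of how a config like `bnb_config` above is typically applied when loading the base model with `transformers`; this mirrors standard usage and is not necessarily the exact script used to produce this checkpoint:

```python
# Hedged sketch: applying bnb_config (defined above) while loading the base model.
from transformers import Gemma3ForConditionalGeneration

model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-12b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
```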
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
### Usage
Below are some code snippets to help you get started quickly with the model. First, install the Transformers library; Gemma 3 is supported starting from transformers 4.50.0.
```sh
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
You can initialize the model and processor for inference with `pipeline` as follows.
```python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3-12b-it",
device="cuda",
torch_dtype=torch.bfloat16
)
```
With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.
```python
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/gemma-3-12b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
model_id, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "text", "text": "Describe this image in detail."}
]
}
]
inputs = processor.apply_chat_template(
messages, add_generation_prompt=True, tokenize=True,
return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and
1B with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This makes it especially suitable
for foundation models, including large language models like these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
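Gemma's actual training code is not public; the following is a purely illustrative JAX sketch of the single-controller pattern described above, where one Python process defines the training step and maps it across all attached accelerators:
```python
# Illustrative sketch only (not Gemma's training code): single-controller
# data parallelism in JAX. One Python process defines train_step, and
# jax.pmap replicates it across every local accelerator.
import functools
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@functools.partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    grads = jax.lax.pmean(grads, axis_name="batch")  # average grads across devices
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

n = jax.local_device_count()
params = {"w": jnp.ones((n, 4, 1)), "b": jnp.zeros((n, 1))}  # replicated parameters
x, y = jnp.ones((n, 8, 4)), jnp.ones((n, 8, 1))              # one batch shard per device
params = train_step(params, x, y)
```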
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies including bias, stereotyping, and harmful
  associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny; input data pre-processing is described
and posterior evaluations are reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development, relative to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805 |
MechaSloth/delete_u_0p_ | MechaSloth | 2025-06-01T03:18:31Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-06-01T03:15:30Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TanAlexanderlz/RALL_RGBCROP_Aug16F-cosine_with_restarts | TanAlexanderlz | 2025-06-01T03:18:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-06-01T01:41:19Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RALL_RGBCROP_Aug16F-cosine_with_restarts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RALL_RGBCROP_Aug16F-cosine_with_restarts
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4225
- Accuracy: 0.8494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the reproduction sketch after this list):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3462
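A minimal sketch of how these settings map onto 🤗 Transformers follows; `train_ds` and `eval_ds` are hypothetical placeholders, since the training data is not published, and `num_labels=2` is an assumption:
```python
# Hedged reproduction sketch; train_ds / eval_ds are hypothetical placeholders
# for the unpublished video dataset, and num_labels=2 is an assumption.
from transformers import VideoMAEForVideoClassification, TrainingArguments, Trainer

model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base-finetuned-kinetics",
    num_labels=2,
    ignore_mismatched_sizes=True,  # swap the Kinetics head for a fresh classifier
)

args = TrainingArguments(
    output_dir="RALL_RGBCROP_Aug16F-cosine_with_restarts",
    learning_rate=5e-06,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    max_steps=3462,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```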
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4395 | 0.0835 | 289 | 0.5305 | 0.7239 |
| 0.2409 | 1.0835 | 578 | 0.5012 | 0.8016 |
| 0.0269 | 2.0835 | 867 | 0.6809 | 0.8160 |
| 0.0086 | 3.0835 | 1156 | 0.8971 | 0.7894 |
| 0.0008 | 4.0835 | 1445 | 0.9614 | 0.8160 |
| 0.0004 | 5.0835 | 1734 | 1.0207 | 0.8160 |
| 0.0004 | 6.0835 | 2023 | 1.0934 | 0.8139 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nicojrz/isa1k | nicojrz | 2025-06-01T03:16:49Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-01T02:35:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
yasu-oh/Llama-3-Swallow-Infused-R1776-70B-GGUF | yasu-oh | 2025-06-01T03:13:31Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:yasu-oh/Llama-3-Swallow-Infused-R1776-70B",
"base_model:finetune:yasu-oh/Llama-3-Swallow-Infused-R1776-70B",
"license:llama3.3",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T03:12:20Z | ---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license:
- llama3.3
- gemma
base_model:
- yasu-oh/Llama-3-Swallow-Infused-R1776-70B
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# Llama-3-Swallow-Infused-R1776-70B-GGUF
base_model: [yasu-oh/Llama-3-Swallow-Infused-R1776-70B](https://huggingface.co/yasu-oh/Llama-3-Swallow-Infused-R1776-70B)
imatrix: [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)
|
mradermacher/R1-Track-GRPO-wo-Think-GGUF | mradermacher | 2025-06-01T03:10:44Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:WangBiao/R1-Track-5k",
"base_model:WangBiao/R1-Track-GRPO-wo-Think-5k",
"base_model:quantized:WangBiao/R1-Track-GRPO-wo-Think-5k",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T14:44:14Z | ---
base_model: WangBiao/R1-Track-GRPO-wo-Think-5k
datasets:
- WangBiao/R1-Track-5k
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/WangBiao/R1-Track-GRPO-wo-Think-5k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
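As a concrete (hedged) example, recent llama.cpp builds with curl support can fetch and run one of the quants below directly; the file name is taken from the table that follows:
```bash
# Run the Q4_K_M quant straight from this repo (requires llama.cpp built with curl)
llama-cli --hf-repo mradermacher/R1-Track-GRPO-wo-Think-GGUF \
          --hf-file R1-Track-GRPO-wo-Think.Q4_K_M.gguf \
          -p "Give the bounding box of the tracked object."
```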
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q5_K_S.gguf) | Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Track-GRPO-wo-Think-GGUF/resolve/main/R1-Track-GRPO-wo-Think.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Jedi-3B-1080p-GGUF | mradermacher | 2025-06-01T03:06:59Z | 149 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:xlangai/Jedi-3B-1080p",
"base_model:quantized:xlangai/Jedi-3B-1080p",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T15:01:26Z | ---
base_model: xlangai/Jedi-3B-1080p
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xlangai/Jedi-3B-1080p
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Jedi-3B-1080p-GGUF/resolve/main/Jedi-3B-1080p.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/NORA-GGUF | mradermacher | 2025-06-01T03:06:50Z | 631 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hungchiayu/NORA",
"base_model:quantized:hungchiayu/NORA",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T15:08:47Z | ---
base_model: hungchiayu/NORA
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hungchiayu/NORA
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NORA-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
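For split quants (such as the PART 1 / PART 2 entries below), the parts are typically plain byte-splits that are joined before loading; a hedged sketch, with illustrative file names:
```bash
# Joining split GGUF parts into a single file before loading (file names illustrative)
cat NORA.Q8_0.gguf.part1of2 NORA.Q8_0.gguf.part2of2 > NORA.Q8_0.gguf
```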
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q2_K.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q2_K.gguf) | Q2_K | 2.7 | |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.mmproj-fp16.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.mmproj-fp16.gguf) | mmproj-fp16 | 2.8 | multi-modal supplement |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q3_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q3_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q3_K_L.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q3_K_L.gguf) | Q3_K_L | 3.5 | |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.IQ4_XS.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.IQ4_XS.gguf) | IQ4_XS | 3.6 | |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q4_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q4_K_S.gguf) | Q4_K_S | 3.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q4_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q4_K_M.gguf) | Q4_K_M | 4.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q5_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q5_K_S.gguf) | Q5_K_S | 4.4 | |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q5_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q5_K_M.gguf) | Q5_K_M | 4.6 | |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q6_K.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q6_K.gguf) | Q6_K | 5.2 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.Q8_0.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.Q8_0.gguf) | Q8_0 | 6.7 | fast, best quality |
| [PART 1](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/NORA.f16.gguf) [PART 2](https://huggingface.co/mradermacher/NORA-GGUF/resolve/main/nora.f16.gguf) | f16 | 12.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
firobeid/L4_LSTM_financial_News_Headlines_generator | firobeid | 2025-06-01T03:03:47Z | 0 | 0 | tensorflow | [
"tensorflow",
"tf-keras",
"lstm",
"text-generation",
"region:us"
] | text-generation | 2025-06-01T02:46:36Z | ---
tags:
- text-generation
- lstm
- tensorflow
library_name: tensorflow
pipeline_tag: text-generation
---
# LSTM Text Generation Model
This model was trained using TensorFlow/Keras for financial article generation tasks.
## Model Details
- **Model Type**: LSTM
- **Framework**: TensorFlow/Keras
- **Task**: Text Generation
- **Vocabulary Size**: 30000
- **Architecture**: Bi-directional Long Short-Term Memory (LSTM)
## Usage
```python
from huggingface_hub import snapshot_download
import tensorflow as tf
import json
import pickle
import numpy as np
# Download model files
model_path = snapshot_download(repo_id="firobeid/L4_LSTM_financial_News_Headlines_generator")
# Load the LSTM model
model = tf.keras.models.load_model(f"{model_path}/lstm_model")
# Load tokenizer
try:
# Try JSON format first
with open(f"{model_path}/tokenizer.json", 'r', encoding='utf-8') as f:
tokenizer_json = f.read()
tokenizer = tf.keras.preprocessing.text.tokenizer_from_json(tokenizer_json)
except FileNotFoundError:
# Fallback to pickle format
with open(f"{model_path}/tokenizer.pkl", 'rb') as f:
tokenizer = pickle.load(f)
# Text generation helpers
from tensorflow.keras.preprocessing.sequence import pad_sequences

def preprocess(texts, max_sequence_length=71):
    """Lower-case the prompt, prepend the <s> start tag, and pad to the model's input length."""
    texts = '<s> {}'.format(texts.lower())
    X = np.array(tokenizer.texts_to_sequences([texts]))
    return pad_sequences(X, maxlen=max_sequence_length, padding='pre')

def next_word(model, tokenizer,
              text, num_gen_words=1,
              random_sampling=False,
              temperature=1):
    '''
    random_sampling: sample the next word from a categorical distribution
    instead of taking the argmax.
    Low temperatures result in more predictable text.
    Higher temperatures result in more surprising text.
    Experiment to find the best setting.
    '''
    input_text = text
    output_text = [input_text]
    for _ in range(num_gen_words):
        X_new = preprocess(input_text)
        if random_sampling:
            # Logits for the last time step of the first (only) sequence
            y_proba = model.predict(X_new, verbose=0)[0, -1:, :]
            rescaled_logits = tf.math.log(y_proba) / temperature
            pred_word_ind = tf.random.categorical(rescaled_logits, num_samples=1)
            pred_word = tokenizer.sequences_to_texts(pred_word_ind.numpy())[0]
        else:
            # Greedy decoding: argmax over the vocabulary at each time step
            y_proba = model.predict(X_new, verbose=0)[0]
            pred_word_ind = np.argmax(y_proba, axis=-1)
            pred_word = tokenizer.index_word[pred_word_ind[-1]]
        input_text += ' ' + pred_word
        output_text.append(pred_word)
        if pred_word == '</s>':  # stop at the end-of-sequence tag
            break
    return ' '.join(output_text)

def generate_text(model, tokenizer, text, num_gen_words=25, temperature=1, random_sampling=False):
    return next_word(model, tokenizer, text, num_gen_words, random_sampling, temperature)

# Example usage
# preprocess() prepends the <s> start tag and lower-cases the prompt for you
generate_text(model,
              tokenizer,
              "Apple",
              num_gen_words=10,
              random_sampling=True,
              temperature=10)
```
## Training
This model was trained on text data using LSTM architecture for next-word prediction.
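The training script itself is not published; below is a minimal sketch of how such a next-word model is typically set up, matching the card's stated vocabulary size (30000) and bi-directional architecture. The corpus, layer widths, and epoch count are assumptions:
```python
# Hedged sketch of the training setup; the corpus and layer sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

corpus = ["<s> apple shares rise after strong earnings </s>"]  # hypothetical headlines
tokenizer = Tokenizer(num_words=30000, oov_token="<unk>", filters="")  # keep <s>/</s> intact
tokenizer.fit_on_texts(corpus)

# Shift-by-one targets: the model predicts the next token at every time step
seqs = pad_sequences(tokenizer.texts_to_sequences(corpus), maxlen=72, padding="pre")
X, Y = seqs[:, :-1], seqs[:, 1:]  # 71 input tokens, matching preprocess() above

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(30000, 128, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.Dense(30000, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, Y, epochs=1)
```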
## Limitations
- Model performance depends on training data quality and size
- Generated text may not always be coherent for longer sequences
- Model architecture is optimized for the specific vocabulary it was trained on
|
mradermacher/olmOCR-7B-faithful-GGUF | mradermacher | 2025-06-01T03:02:21Z | 312 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tngtech/olmOCR-7B-faithful",
"base_model:quantized:tngtech/olmOCR-7B-faithful",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T21:17:33Z | ---
base_model: tngtech/olmOCR-7B-faithful
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tngtech/olmOCR-7B-faithful
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/olmOCR-7B-faithful-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/olmOCR-7B-faithful-GGUF/resolve/main/olmOCR-7B-faithful.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AMator/MS-RP-whole-Q8_0-GGUF | AMator | 2025-06-01T03:01:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mergekit-community/MS-RP-whole",
"base_model:quantized:mergekit-community/MS-RP-whole",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-01T03:00:07Z | ---
base_model: mergekit-community/MS-RP-whole
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# AMator/MS-RP-whole-Q8_0-GGUF
This model was converted to GGUF format from [`mergekit-community/MS-RP-whole`](https://huggingface.co/mergekit-community/MS-RP-whole) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/MS-RP-whole) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AMator/MS-RP-whole-Q8_0-GGUF --hf-file ms-rp-whole-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AMator/MS-RP-whole-Q8_0-GGUF --hf-file ms-rp-whole-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AMator/MS-RP-whole-Q8_0-GGUF --hf-file ms-rp-whole-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AMator/MS-RP-whole-Q8_0-GGUF --hf-file ms-rp-whole-q8_0.gguf -c 2048
```
|
mingxilei/r_imdb_reward_8_0.001_m_10 | mingxilei | 2025-06-01T03:01:40Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T03:01:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/GarrieD-gr00t-chicken_in_pot_v1-vcrqg | phospho-app | 2025-06-01T03:00:18Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-01T02:05:36Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [GarrieD/chicken_in_pot_v1](https://huggingface.co/datasets/GarrieD/chicken_in_pot_v1)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
mradermacher/Bpe-vocab-n-OCR-GGUF | mradermacher | 2025-06-01T02:59:29Z | 187 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"bpe",
"ocr",
"en",
"zh",
"base_model:prithivMLmods/Bpe-vocab-n-OCR",
"base_model:quantized:prithivMLmods/Bpe-vocab-n-OCR",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T20:08:38Z | ---
base_model: prithivMLmods/Bpe-vocab-n-OCR
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- bpe
- ocr
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Bpe-vocab-n-OCR
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bpe-vocab-n-OCR-GGUF/resolve/main/Bpe-vocab-n-OCR.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
VIDEOS-18-Mathira-Khan-Videos/FULL.VIDEO.Mathira.Khan.Viral.Video.Tutorial.Official | VIDEOS-18-Mathira-Khan-Videos | 2025-06-01T02:54:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T02:54:42Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
augustinedevops/xh | augustinedevops | 2025-06-01T02:53:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T02:28:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: xh
---
# Xh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `xh` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "xh",
"lora_weights": "https://huggingface.co/augustinedevops/xh/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('augustinedevops/xh', weight_name='lora.safetensors')
image = pipeline('xh').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/augustinedevops/xh/discussions) to add images that show off what you’ve made with this LoRA.
|
Flora-chai/filtered_res15_0.17 | Flora-chai | 2025-06-01T02:51:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T02:47:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-Viral-Sofia-Ansari-Viral-Video/FULL.VIDEO.LINK.Sofia.Ansari.Viral.Video.Leaks.Official | New-Viral-Sofia-Ansari-Viral-Video | 2025-06-01T02:44:57Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-01T02:44:38Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
keanteng/sesame-csm-elise | keanteng | 2025-06-01T02:42:41Z | 41 | 1 | transformers | [
"transformers",
"safetensors",
"csm",
"text-to-audio",
"generative-ai",
"text-to-speech",
"en",
"dataset:MrDragonFox/Elise",
"base_model:sesame/csm-1b",
"base_model:finetune:sesame/csm-1b",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-05-29T07:51:23Z | ---
license: agpl-3.0
datasets:
- MrDragonFox/Elise
language:
- en
base_model:
- sesame/csm-1b
pipeline_tag: text-to-speech
library_name: transformers
tags:
- generative-ai
new_version: keanteng/sesame-csm-elise-lora
---
# CSM Elise Voice Model
This model is a fine-tuned version of [sesame/csm-1b](https://huggingface.co/sesame/csm-1b) using the [Elise dataset](https://huggingface.co/datasets/MrDragonFox/Elise). There are sample output files in the repository.
## Model Details
- **Base Model**: sesame/csm-1b
- **Training Data**: MrDragonFox/Elise dataset
- **Fine-tuning Approach**: Voice cloning through conditional speech generation
- **Voice Characteristics**: [Describe voice qualities]
- **Training Parameters**:
- Learning Rate: 2e-5
- Epochs: 3
- Batch Size: 1 with gradient accumulation steps of 4
## Quick Start
```python
from transformers import CsmForConditionalGeneration, AutoProcessor
import torch
import soundfile as sf
# Load the model
model_id = "keanteng/sesame-csm-elise" # Replace with your model
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
```
## Basic Text-to-Speech
```python
# Simple text generation
conversation = [
{"role": "0", "content": [{"type": "text", "text": "Hello, this is a test!"}]}
]
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
# Generate audio
audio = model.generate(**inputs, output_audio=True)
audio_cpu = audio[0].to(torch.float32).cpu().numpy()
# Save to file
sf.write("output.wav", audio_cpu, 24000)
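# --- Hedged extension: multi-turn generation with the same API ---
# The speaker-id roles ("0", "1") follow the single-turn pattern above;
# the dialogue text and output filename are illustrative assumptions.
conversation = [
    {"role": "0", "content": [{"type": "text", "text": "Hi there, lovely to meet you."}]},
    {"role": "1", "content": [{"type": "text", "text": "Likewise! How has your day been?"}]},
]
inputs = processor.apply_chat_template(conversation, tokenize=True, return_dict=True).to(device)
audio = model.generate(**inputs, output_audio=True)
sf.write("conversation.wav", audio[0].to(torch.float32).cpu().numpy(), 24000)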
``` |
BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmbd14hgf02ee10ozqupxg3z8 | BootesVoid | 2025-06-01T02:40:07Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T02:40:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AVAVEXLEY
---
# Cmb8M8D1W0O7Xlexpbpatgaap_Cmbd14Hgf02Ee10Ozqupxg3Z8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AVAVEXLEY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "AVAVEXLEY",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmbd14hgf02ee10ozqupxg3z8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmbd14hgf02ee10ozqupxg3z8', weight_name='lora.safetensors')
image = pipeline('AVAVEXLEY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmbd14hgf02ee10ozqupxg3z8/discussions) to add images that show off what you’ve made with this LoRA.
|
zoya-hammadk/nutrivision-roberta-25 | zoya-hammadk | 2025-06-01T02:39:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T02:38:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Qwen3-30B-A3B-Q6_K-GGUF | Triangle104 | 2025-06-01T02:39:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-01T02:32:42Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-30B-A3B-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
---
Qwen3-30B-A3B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and 131,072 tokens with YaRN.
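The YaRN-extended context mentioned above typically has to be enabled explicitly in llama.cpp via its rope-scaling flags. A hedged sketch (flag names per recent llama.cpp builds; confirm with `llama-server --help`):

```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf \
  --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 -c 131072
```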
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q6_K-GGUF --hf-file qwen3-30b-a3b-q6_k.gguf -c 2048
```
|
Ajo-Subarjo/Nira_uq | Ajo-Subarjo | 2025-06-01T02:36:51Z | 0 | 0 | null | [
"gguf",
"unconditional-image-generation",
"base_model:cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B",
"base_model:quantized:cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | unconditional-image-generation | 2025-06-01T02:18:08Z | ---
license: apache-2.0
base_model:
- cognitivecomputations/Dolphin-2.9.1-Phi-3-Kensho-4.5B
pipeline_tag: unconditional-image-generation
--- |
Flora-chai/filtered_lap16_0.5378 | Flora-chai | 2025-06-01T02:30:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-01T02:26:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Qwen2.5-7B-sft-all-pool-KTO | AmberYifan | 2025-06-01T02:29:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T02:08:27Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-all-pool-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Qwen2.5-7B-sft-all-pool-KTO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-all-pool-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/iw6gtui5)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
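For readers who want to reproduce this stage, a minimal sketch of a KTO run with TRL follows. The dataset rows, output path, and hyperparameters are illustrative, and the `processing_class` argument name should be checked against your TRL version.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# KTO consumes unpaired feedback: a prompt, a completion, and a boolean label
# marking the completion as desirable (True) or undesirable (False).
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?", "What is the capital of France?"],
    "completion": ["The capital of France is Paris.", "France has no capital."],
    "label": [True, False],
})

args = KTOConfig(output_dir="Qwen2.5-7B-sft-all-pool-KTO", logging_steps=10)
trainer = KTOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```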
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Hipsterusername/Embeddings | Hipsterusername | 2025-06-01T02:26:15Z | 0 | 0 | null | [
"base_model:Hipsterusername/customxl-mirage",
"base_model:finetune:Hipsterusername/customxl-mirage",
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-01T02:22:44Z | ---
license: creativeml-openrail-m
base_model:
- Hipsterusername/customxl-mirage
---
Embeddings for CustomXL and other SDXL base models. They break image coherence to produce interesting illustrative base images for further iteration.
<h2> Base Image </h2>

<h2> NewIlluXL - Heavy </h2>

<h2> NewIlluXL - Medium </h2>

|
Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF | Triangle104 | 2025-06-01T02:22:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-01T02:19:56Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
---
Qwen3-30B-A3B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and 131,072 tokens with YaRN.
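On a GPU build of llama.cpp, model layers can be offloaded to speed up inference with the `-ngl` (GPU layers) flag; a hedged sketch, with the layer count and context size as illustrative values:

```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf \
  -ngl 99 -c 8192 -p "The meaning to life and the universe is"
```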
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q5_K_M-GGUF --hf-file qwen3-30b-a3b-q5_k_m.gguf -c 2048
```
|
amazeble/elise_lora | amazeble | 2025-06-01T02:20:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:MrDragonFox/baddy_S3_EXP_3",
"base_model:finetune:MrDragonFox/baddy_S3_EXP_3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T02:20:24Z | ---
base_model: MrDragonFox/baddy_S3_EXP_3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** amazeble
- **License:** apache-2.0
- **Finetuned from model :** MrDragonFox/baddy_S3_EXP_3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
elliotthwangmsa/KimLan-Mistral0.2-7b-tw_train_ouputs | elliotthwangmsa | 2025-06-01T02:20:31Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:elliotthwang/Ministral-7B-Instruct-v0.2-tw",
"base_model:adapter:elliotthwang/Ministral-7B-Instruct-v0.2-tw",
"region:us"
] | null | 2025-05-31T09:25:58Z | ---
base_model: elliotthwang/Ministral-7B-Instruct-v0.2-tw
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
aliffianilhamf/ddpm-celebahq-finetuned-deepdrid-1epochs-5e-05 | aliffianilhamf | 2025-06-01T02:14:59Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-06-01T02:14:51Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Diffusion model for image generation, fine-tuned on the DeepDRiD dataset.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('aliffianilhamf/ddpm-celebahq-finetuned-deepdrid-1epochs-5e-05')
image = pipeline().images[0]
image
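# Optionally persist the sample to disk (illustrative filename):
image.save("ddpm_deepdrid_sample.png")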
```
|
manuross1/cndnlsldd6k | manuross1 | 2025-06-01T02:10:28Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T19:36:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cndnlsldd6k
---
# Cndnlsldd6K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cndnlsldd6k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "cndnlsldd6k",
"lora_weights": "https://huggingface.co/manuross1/cndnlsldd6k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/cndnlsldd6k', weight_name='lora.safetensors')
image = pipeline('cndnlsldd6k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/cndnlsldd6k/discussions) to add images that show off what you’ve made with this LoRA.
|
bruhzair/prototype0.4x48 | bruhzair | 2025-06-01T02:06:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:46:53Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x48
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 as the base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
parameters:
select_topk: 0.1
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
parameters:
select_topk: 0.13
- model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
parameters:
select_topk: 0.3
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
parameters:
select_topk: 0.25
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.5
base_model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
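To reproduce a merge from a config like this, mergekit's `mergekit-yaml` CLI consumes the YAML directly; a minimal sketch, assuming the config above is saved locally as `sce-config.yaml` (paths are illustrative):

```bash
pip install mergekit
mergekit-yaml sce-config.yaml ./prototype-0.4x48 --cuda
```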
|
Haranji25/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt | Haranji25 | 2025-06-01T02:02:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am iridescent hardy newt",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:34:48Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am iridescent hardy newt
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Haranji25/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-iridescent_hardy_newt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
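As a rough illustration of this procedure, here is a minimal GRPO sketch with TRL; the prompt set, toy reward function, and output path are assumptions, not this swarm's actual setup.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# GRPO samples several completions per prompt and scores them with reward functions.
train_dataset = Dataset.from_dict({"prompt": ["Compute 12 * 7.", "Name a prime number above 50."]})

def reward_len(completions, **kwargs):
    # Toy reward: prefer answers close to 20 characters (illustrative only).
    return [-abs(len(c) - 20) / 20.0 for c in completions]

args = GRPOConfig(output_dir="Qwen2.5-1.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```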
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LinaSad/mcqa_sciq_merged_bis_lr5105 | LinaSad | 2025-06-01T02:00:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:59:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LinaSad/mcqa_sciq_lora_bis_lr5105 | LinaSad | 2025-06-01T01:59:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T01:58:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cgato/Nemo12b-TheSyntheticOne | cgato | 2025-06-01T01:57:18Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-30T11:43:23Z | ---
license: cc-by-nc-4.0
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
Trained using https://huggingface.co/datasets/cgato/TheSmarts for demonstration purposes. Probably a fairly competent assistant model. May do KTO on top to sand down the edges and improve performance later.
### Prompt Format: ChatML
Roles: system, user, assistant
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9338 | 0.0003 | 1 | 0.9347 |
| 0.8271 | 0.0328 | 100 | 0.7933 |
| 0.9541 | 0.0656 | 200 | 0.8407 |
| 0.7497 | 0.0984 | 300 | 0.7934 |
| 0.8786 | 0.1311 | 400 | 0.7724 |
| 0.8257 | 0.1639 | 500 | 0.7627 |
| 0.8258 | 0.1967 | 600 | 0.7679 |
| 0.7207 | 0.2295 | 700 | 0.7497 |
| 0.9439 | 0.2623 | 800 | 0.7576 |
| 0.852 | 0.2951 | 900 | 0.7361 |
| 0.7852 | 0.3279 | 1000 | 0.7375 |
| 0.7 | 0.3607 | 1100 | 0.7298 |
| 0.7865 | 0.3934 | 1200 | 0.7202 |
| 0.6182 | 0.4262 | 1300 | 0.7146 |
| 0.6885 | 0.4590 | 1400 | 0.7131 |
| 0.7154 | 0.4918 | 1500 | 0.7083 |
| 0.7187 | 0.5246 | 1600 | 0.7016 |
| 0.6877 | 0.5574 | 1700 | 0.6976 |
| 0.7908 | 0.5902 | 1800 | 0.6946 |
| 0.7664 | 0.6230 | 1900 | 0.6894 |
| 0.7214 | 0.6557 | 2000 | 0.6857 |
| 0.6971 | 0.6885 | 2100 | 0.6837 |
| 0.6527 | 0.7213 | 2200 | 0.6804 |
| 0.6815 | 0.7541 | 2300 | 0.6781 |
| 0.6359 | 0.7869 | 2400 | 0.6759 |
| 0.6874 | 0.8197 | 2500 | 0.6742 |
| 0.5999 | 0.8525 | 2600 | 0.6728 |
| 0.7391 | 0.8852 | 2700 | 0.6719 |
| 0.6509 | 0.9180 | 2800 | 0.6710 |
| 0.6346 | 0.9508 | 2900 | 0.6702 |
| 0.7023 | 0.9836 | 3000 | 0.6696 |
|
Bifrost-AI/Phi-4-bifrost-sol-3.8B | Bifrost-AI | 2025-06-01T01:45:01Z | 39 | 1 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"code",
"finance",
"chat",
"large-language-model",
"conversational",
"custom_code",
"en",
"dataset:Bifrost-AI/Solana-Vanguard-Challenge",
"arxiv:2503.01743",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:finetune:microsoft/Phi-4-mini-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T11:20:08Z | ---
license: mit
datasets:
- Bifrost-AI/Solana-Vanguard-Challenge
language:
- en
metrics:
- accuracy
- code_eval
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
tags:
- code
- finance
- chat
- text-generation
- large-language-model
library_name: transformers
---
# Phi 4 Bifrost SOL 3B (Mini Instruct)
### This fine-tuned variant of Microsoft's Phi 4 Mini Instruct model was supervised fine-tuned on blockchain-specific datasets(Bifrost-AI/Solana-Vanguard-Challenge), optimized for downstream tasks in blockchain coding and smart contract development on the Solana ecosystem.
The **Solana Vanguard Challenge** dataset, comprising 1,000 diverse and in-depth questions, offers full-spectrum coverage of the Solana ecosystem. It spans fundamental blockchain concepts, advanced on-chain programming in Rust and the Anchor framework, client-side integration in TypeScript, detailed security strategies, and performance as well as regulatory considerations.
Phi 4 Bifrost SOL Mini Instruct is in active development with additional fine-tuning sessions, & benchmark statistics coming soon!
## Training Session:
- Time: 16 hours & 42 minutes
- GPU: NVIDIA GeForce RTX 3090
- Batches: 3000
- Context-Size: 4098
- Batch-size: 1
- Learning-rate: 2e-5
- Training-loss: 0.84
- Eval-loss: 0.61
## Dataset Composition
- **Total Questions:** 1,000
- **Languages Covered:**
- **Rust:** On-chain smart contract development, security best practices, advanced state management, CPIs, PDAs, and more.
- **TypeScript:** Client-side integration using @solana/web3.js, wallet adapters, Metaplex for NFT protocols, dynamic transaction composition, and front-end dApp development.
- **Planned Extensions:**
- **C# (Solnet):** To be integrated later for .NET ecosystem coverage.
#### Example
After obtaining the Phi-4-bifrost-sol model checkpoints, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_path = "Bifrost-AI/Phi-4-bifrost-sol-3.8B"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
    {"role": "system", "content": "This is a dialog transcript where the User interacts with an agent named Eva that can see, talk and act. Eva works as a Professional typescript, rust & csharp Software engineer and possesses qualities such as expert, methodical, innovative. She always responds immediately and precisely. She was created by Microsoft & Bifrost. Wrap code in ``` for readability."},
    # Example user turn so the pipeline has a prompt to answer (illustrative question):
    {"role": "user", "content": "Write a TypeScript snippet that fetches a wallet's SOL balance with @solana/web3.js."},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Disclaimer
We do not recommend using Phi4 Bifrost SOL Mini-Instruct in commercial or real-world applications without further testing and development. This current model (v1) is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly.
#### ------------------------Base Model Card------------------------
🎉**Phi-4**: [[mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning) | [reasoning](https://huggingface.co/microsoft/Phi-4-reasoning)] | [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)];
[[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)]
## Model Summary
Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
📰 [Phi-4-mini Microsoft Blog](https://aka.ms/phi4-feb2025) <br>
📖 [Phi-4-mini Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br>
👩🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br>
🏡 [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br>
🖥️ Try It [Azure](https://aka.ms/phi-4-mini/azure), [Huggingface](https://huggingface.co/spaces/microsoft/phi-4-mini) <br>
🚀 [Model paper](https://huggingface.co/papers/2503.01743)
## Intended Uses
### Primary Use Cases
The model is intended for broad multilingual commercial and research use. It provides capabilities for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially math and logic).
The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance difference across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This release of Phi-4-mini-instruct is based on valuable user feedback from the Phi-3 series. The Phi-4-mini model employs a new architecture for efficiency and a larger vocabulary for multilingual support, and better post-training techniques were used for instruction following and function calling; together with additional data, these lead to substantial gains on key capabilities. It is anticipated that most use cases will benefit from this release, but users are encouraged to test it in their particular AI applications. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4-mini-instruct is welcomed and crucial to the model's evolution and improvement.
### Model Quality
To understand the capabilities, the 3.8B parameters Phi-4-mini-instruct model was compared with a set of models over a variety of benchmarks using an internal benchmark platform (See Appendix A for benchmark methodology). A high-level overview of the model quality is as follows:
| Benchmark | Similar size | | | | |2x size | | | | | |
|----------------------------------|-------------|-------------------|-------------------|-------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| | Phi-4 mini-Ins | Phi-3.5-mini-Ins | Llama-3.2-3B-Ins | Mistral-3B | Qwen2.5-3B-Ins | Qwen2.5-7B-Ins | Mistral-8B-2410 | Llama-3.1-8B-Ins | Llama-3.1-Tulu-3-8B | Gemma2-9B-Ins | GPT-4o-mini-2024-07-18 |
| **Popular aggregated benchmark** | | | | | | | | | | | |
| Arena Hard | 32.8 | 34.4 | 17.0 | 26.9 | 32.0 | 55.5 | 37.3 | 25.7 | 42.7 | 43.7 | 53.7 |
| BigBench Hard (0-shot, CoT) | 70.4 | 63.1 | 55.4 | 51.2 | 56.2 | 72.4 | 53.3 | 63.4 | 55.5 | 65.7 | 80.4 |
| MMLU (5-shot) | 67.3 | 65.5 | 61.8 | 60.8 | 65.0 | 72.6 | 63.0 | 68.1 | 65.0 | 71.3 | 77.2 |
| MMLU-Pro (0-shot, CoT) | 52.8 | 47.4 | 39.2 | 35.3 | 44.7 | 56.2 | 36.6 | 44.0 | 40.9 | 50.1 | 62.8 |
| **Reasoning** | | | | | | | | | | | |
| ARC Challenge (10-shot) | 83.7 | 84.6 | 76.1 | 80.3 | 82.6 | 90.1 | 82.7 | 83.1 | 79.4 | 89.8 | 93.5 |
| BoolQ (2-shot) | 81.2 | 77.7 | 71.4 | 79.4 | 65.4 | 80.0 | 80.5 | 82.8 | 79.3 | 85.7 | 88.7 |
| GPQA (0-shot, CoT) | 25.2 | 26.6 | 24.3 | 24.4 | 23.4 | 30.6 | 26.3 | 26.3 | 29.9 | 39.1 | 41.1 |
| HellaSwag (5-shot) | 69.1 | 72.2 | 77.2 | 74.6 | 74.6 | 80.0 | 73.5 | 72.8 | 80.9 | 87.1 | 88.7 |
| OpenBookQA (10-shot) | 79.2 | 81.2 | 72.6 | 79.8 | 79.3 | 82.6 | 80.2 | 84.8 | 79.8 | 90.0 | 90.0 |
| PIQA (5-shot) | 77.6 | 78.2 | 68.2 | 73.2 | 72.6 | 76.2 | 81.2 | 83.2 | 78.3 | 83.7 | 88.7 |
| Social IQA (5-shot) | 72.5 | 75.1 | 68.3 | 73.9 | 75.3 | 75.3 | 77.6 | 71.8 | 73.4 | 74.7 | 82.9 |
| TruthfulQA (MC2) (10-shot) | 66.4 | 65.2 | 59.2 | 62.9 | 64.3 | 69.4 | 63.0 | 69.2 | 64.1 | 76.6 | 78.2 |
| Winogrande (5-shot) | 67.0 | 72.2 | 53.2 | 59.8 | 63.3 | 71.1 | 63.1 | 64.7 | 65.4 | 74.0 | 76.9 |
| **Multilingual** | | | | | | | | | | | |
| Multilingual MMLU (5-shot) | 49.3 | 51.8 | 48.1 | 46.4 | 55.9 | 64.4 | 53.7 | 56.2 | 54.5 | 63.8 | 72.9 |
| MGSM (0-shot, CoT) | 63.9 | 49.6 | 44.6 | 44.6 | 53.5 | 64.5 | 56.7 | 56.7 | 58.6 | 75.1 | 81.7 |
| **Math** | | | | | | | | | | | |
| GSM8K (8-shot, CoT) | 88.6 | 76.9 | 75.6 | 80.1 | 80.6 | 88.7 | 81.9 | 82.4 | 84.3 | 84.9 | 91.3 |
| MATH (0-shot, CoT) | 64.0 | 49.8 | 46.7 | 41.8 | 61.7 | 60.4 | 41.6 | 47.6 | 46.1 | 51.3 | 70.2 |
| **Overall** | **63.5** | **60.5** | **56.2** | **56.9** | **60.1** | **67.9** | **60.2** | **62.3** | **60.9** | **65.0** | **75.5** |
Overall, the model, with only 3.8B parameters, achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store extensive factual knowledge, so users may experience factual inaccuracies. However, it may be possible to resolve such weakness by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings.
## Usage
### Tokenizer
Phi-4-mini-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
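As a hedged sketch (not part of the official tokenizer documentation), the current vocabulary can be inspected and extended with `add_special_tokens`; the token strings below are illustrative placeholders.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")
print(len(tokenizer))  # current vocabulary size, including placeholder tokens

# Register extra special tokens for downstream fine-tuning (illustrative names).
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|my_marker|>", "<|/my_marker|>"]}
)
print(f"Added {num_added} tokens; new size: {len(tokenizer)}")
```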
### Input Formats
Given the nature of the training data, the Phi-4-mini-instruct model is best suited for prompts using specific formats. Below are the two primary formats:
#### Chat format
This format is used for general conversation and instructions:
```yaml
<|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|>
```
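Rather than assembling this string by hand, the tokenizer's built-in chat template should render the same layout; a minimal sketch (the prompt text is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what grouped-query attention is."},
]
# Renders the <|system|>...<|end|><|user|>...<|end|><|assistant|> layout
# without manual string work.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```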
#### Tool-enabled function-calling format
This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. Example:
```
<|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|>
```
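Because the tool list is ordinary JSON, the system prompt can be assembled programmatically; a minimal hedged sketch using `json.dumps`:

```python
import json

tools = [{
    "name": "get_weather_updates",
    "description": "Fetches weather updates for a given city using the RapidAPI Weather API.",
    "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}},
}]

# Wrap the JSON dump in <|tool|>...<|/tool|> inside the system segment.
system = "You are a helpful assistant with some tools." + f"<|tool|>{json.dumps(tools)}<|/tool|>"
prompt = f"<|system|>{system}<|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|>"
print(prompt)
```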
### Inference with vLLM
#### Requirements
List of required packages:
```
flash_attn==2.7.4.post1
torch==2.5.1
vllm>=0.7.3
```
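The card lists only the package requirements here; a minimal hedged example of chat inference with vLLM (assuming the `LLM.chat` API available in `vllm>=0.7.3`) might look like:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="microsoft/Phi-4-mini-instruct", trust_remote_code=True)
sampling = SamplingParams(max_tokens=256, temperature=0.0)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Phi-4-mini architecture in two sentences."},
]
# llm.chat applies the model's chat template before generation.
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```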
### Inference with Transformers
#### Requirements
The Phi-4 family has been integrated in `transformers` version `4.49.0`. The installed `transformers` version can be verified with `pip list | grep transformers`.
Python 3.8 and 3.10 will work best.
List of required packages:
```
flash_attn==2.7.4.post1
torch==2.5.1
transformers==4.49.0
accelerate==1.3.0
```
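For completeness, a minimal hedged example of chat-format inference with the `transformers` pipeline (the prompt is illustrative):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="microsoft/Phi-4-mini-instruct",
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give a short introduction to small language models."},
]
out = pipe(messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```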
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses.
+ Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift.
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
+ **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-mini, the major changes with Phi-4-mini-instruct are 200K vocabulary, grouped-query attention, and shared input and output embedding.<br>
+ **Inputs:** Text. It is best suited for prompts using the chat format.<br>
+ **Context length:** 128K tokens<br>
+ **GPUs:** 512 A100-80G<br>
+ **Training time:** 21 days<br>
+ **Training data:** 5T tokens<br>
+ **Outputs:** Generated text in response to the input<br>
+ **Dates:** Trained between November and December 2024<br>
+ **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br>
+ **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
+ **Release date:** February 2025<br>
### Training Datasets
Phi-4-mini’s training data includes a wide variety of sources, totaling 5 trillion tokens, and is a combination of
1) publicly available documents filtered for quality, selected high-quality educational data, and code
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (e.g., science, daily activities, theory of mind, etc.)
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. Focus was placed on the quality of data that could potentially improve the model's reasoning ability, and the publicly available documents were filtered to contain a preferred level of knowledge. For example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but such information was removed to leave more capacity for reasoning given the model's small size. More details about data can be found in the Phi-4-mini-instruct technical report.
The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis.
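A hedged sketch of the n-gram comparison described above; the tokenization, n value, and threshold are illustrative assumptions rather than the actual pipeline.

```python
# Flag a training sample as contaminated if too many of its n-grams
# also appear in a benchmark dataset.
def ngrams(tokens, n=13):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample_tokens, benchmark_ngrams, n=13, threshold=0.1):
    sample = ngrams(sample_tokens, n)
    if not sample:
        return False
    overlap = len(sample & benchmark_ngrams) / len(sample)
    return overlap >= threshold  # flagged samples are removed from the dataset

benchmark = ngrams("the quick brown fox jumps over the lazy dog".split(), n=3)
print(is_contaminated("the quick brown fox".split(), benchmark, n=3))  # True
```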
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/sample_finetune.py).
## Safety Evaluation and Red-Teaming
Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models’ propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the Phi 3 Safety Post-Training paper had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the Phi 3 Safety Post-Training paper. For this release, the red team tested the model in English, Chinese, Japanese, Spanish, Portuguese, Arabic, Thai, and Russian for the following potential harms: Hate Speech and Bias, Violent Crimes, Specialized Advice, and Election Information. Their findings indicate that the model is resistant to jailbreak techniques across languages, but that language-specific attack prompts leveraging cultural context can cause the model to output harmful content. Another insight was that with function calling scenarios, the model could sometimes hallucinate function names or URLs. The model may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-4-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch below)
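For example, a short sketch of loading the model with the eager attention implementation:

```python
from transformers import AutoModelForCausalLM

# Fall back to eager attention on GPUs without flash-attention support.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-4-mini-instruct",
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="eager",
)
```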
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
## Appendix A: Benchmark Methodology
We include a brief word on methodology here - and in particular, how we think about optimizing prompts.
In an ideal world, we would never change any prompts in our benchmarks to ensure it is always an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date.
There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example:
+ A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …” which may break the parser. In such cases, we have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”).
+ With some models, we observed that few-shot prompts actually hurt model performance. In this case we did allow running the benchmarks with 0-shot for all cases.
+ We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models have different keywords e.g. Human vs User. In these cases, we do allow for model-specific mappings for chat to completion prompts.
However, we do not:
+ Pick different few-shot examples. Few shots will always be the same when comparing different models.
+ Change prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice.
### Benchmark datasets
The model was evaluated across a breadth of public and internal benchmarks to understand its capabilities under multiple tasks and conditions. While most evaluations use English, a leading multilingual benchmark was incorporated to cover performance in select languages. More specifically,
+ Reasoning:
+ Winogrande: commonsense reasoning around pronoun resolution
+ PIQA: physical commonsense reasoning around everyday situations
+ ARC-challenge: grade-school multiple choice science questions
+ GPQA: very hard questions written and validated by experts in biology, physics, and chemistry
+ MedQA: medical questions answering
+ Social IQA: social commonsense intelligence
+ BoolQ: natural questions from context
+ TruthfulQA: grounded reasoning
+ Language understanding:
+ HellaSwag: commonsense natural language inference around everyday events
+ ANLI: adversarial natural language inference
+ Function calling:
+ Berkeley function calling benchmark: function and tool calls
+ Internal function calling benchmarks
+ World knowledge:
+ TriviaQA: trivia question on general topics
+ Math:
+ GSM8K: grade-school math word problems
+ GSM8K Hard: grade-school math word problems with large values and some absurdity.
+ MATH: challenging competition math problems
+ Code:
+ HumanEval, HumanEval+, MBPP, MBPP+: Python coding tasks
+ LiveCodeBench, LiveBench: contamination-free code tasks
+ BigCode Bench: challenging programming tasks
+ Spider: SQL query tasks
+ Internal coding benchmarks
+ Instructions following:
+ IFEval: verifiable instructions
+ Internal instructions following benchmarks
+ Multilingual:
+ MGSM: multilingual grade-school math
+ Multilingual MMLU and MMLU-pro
+ MEGA: multilingual NLP tasks
+ Popular aggregated datasets: MMLU, MMLU-pro, BigBench-Hard, AGI Eval
+ Multi-turn conversations:
+ Data generated by in-house adversarial conversation simulation tool
+ Single-turn trustworthiness evaluation:
+ DecodingTrust: a collection of trustworthiness benchmarks in eight different perspectives
+ XSTest: exaggerated safety evaluation
+ Toxigen: adversarial and hate speech detection
+ Red Team:
+ Responses to prompts provided by AI Red Team at Microsoft
--- |
AmberYifan/Qwen2.5-7B-sft-SPIN-gpt4o-KTO | AmberYifan | 2025-06-01T01:44:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:24:40Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-SPIN-gpt4o-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Qwen2.5-7B-sft-SPIN-gpt4o-KTO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-SPIN-gpt4o-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/j9609dgc)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
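As a hedged illustration only, a KTO run with the TRL 0.12 API might look like the sketch below; the dataset and hyperparameters are placeholders, not the actual training configuration.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_name = "AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative unpaired-preference dataset with prompt/completion/label columns.
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")

args = KTOConfig(output_dir="Qwen2.5-7B-sft-SPIN-gpt4o-KTO", per_device_train_batch_size=1)
trainer = KTOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```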
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AmberYifan/Qwen2.5-7B-sft-SPIN-Qwen2.5-72B-Instruct-ORPO | AmberYifan | 2025-06-01T01:38:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:23:31Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-SPIN-Qwen2.5-72B-Instruct-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Qwen2.5-7B-sft-SPIN-Qwen2.5-72B-Instruct-ORPO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-SPIN-Qwen2.5-72B-Instruct-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/x2lpjhyq)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
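As a hedged illustration only, an ORPO run with the TRL 0.12 API might look like the sketch below; the dataset and hyperparameters are placeholders, not the actual training configuration.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative preference dataset with chosen/rejected columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = ORPOConfig(output_dir="Qwen2.5-7B-sft-SPIN-Qwen2.5-72B-Instruct-ORPO", per_device_train_batch_size=1)
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```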
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |