Dataset schema (column types and observed value ranges):

| Column | Type | Min | Max |
|:---|:---|:---|:---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-28 00:40:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (500 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-28 00:36:54 |
| card | string (length) | 11 | 1.01M |
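Assuming this preview comes from a Hugging Face dataset of model-card metadata, a minimal sketch of iterating it with the `datasets` library follows; the repo ID is a placeholder, since the schema block does not name the source dataset:

```python
from datasets import load_dataset

# Placeholder repo ID: the schema above does not identify the source dataset.
ds = load_dataset("some-org/model-cards-dump", split="train", streaming=True)

for row in ds.take(3):
    # Column names per the schema table above.
    print(row["modelId"], row["author"], row["downloads"], row["likes"], row["pipeline_tag"])
```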
ywiyogo/q-Taxi-v3
ywiyogo
2025-05-03T07:52:51Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T07:52:49Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.74
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook.
model = load_from_hub(repo_id="ywiyogo/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
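For a self-contained version of the above, here is a sketch of downloading the pickled Q-table and evaluating its greedy policy, which is how the mean_reward metric in the frontmatter is typically computed. The pickle layout (a dict with "qtable" and "env_id" keys) is an assumption based on the course convention:

```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download the pickled Q-table from the Hub.
path = hf_hub_download(repo_id="ywiyogo/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed dict with "qtable" and "env_id" keys

env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

# Greedy rollout: always take the action with the highest Q-value.
rewards = []
for _ in range(100):
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(qtable[state]))
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    rewards.append(total)

print(f"mean_reward: {np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```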
grapevine-AI/Qwen3-30B-A3B-GGUF
grapevine-AI
2025-05-03T07:46:48Z
0
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T07:14:33Z
---
license: apache-2.0
---

# What is this?
This is [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B), Alibaba Cloud's hybrid thinking/non-thinking MoE model, quantized with a Japanese imatrix.

# imatrix dataset
To prioritize Japanese capability, quantization used the [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) dataset, which contains a large amount of Japanese text.

# Chat template
```
<|im_start|>system
Write your system prompt here.<|im_end|>
<|im_start|>user
Write your message here.<|im_end|>
<|im_start|>assistant
```

# Quants
Each quant and its benchmark score (Elyza_tasks 100, graded by Gemini 2.0 Flash) are summarized below.

- With thinking

|Quant|Score|Comment|
|---|---|---|
|Q8_0|4.41||
|Q6_K|4.44||
|Q5_K_M|4.46|Recommended|
|Q4_K_M|4.44||
|IQ4_XS|4.43||

- Without thinking

|Quant|Score|Comment|
|---|---|---|
|Q8_0|4.06||
|Q6_K|4.09||
|Q5_K_M|4.18|Recommended|
|Q4_K_M|4.07||
|IQ4_XS|3.98||

# Environment
Quantization was performed using the Windows build of llama.cpp b5218 and the convert-hf-to-gguf.py released alongside it.

# License
Apache 2.0

# Developer
Alibaba Cloud
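For reference, a minimal sketch of assembling a single-turn prompt in the ChatML format shown above; the function name and example strings are illustrative, not part of the original card:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt using the chat template from this card."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
# Pass `prompt` to a llama.cpp front end, e.g. llama-cli -m <quant>.gguf -p "<prompt>"
```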
sommerzen/qwemani-3-4b_v2
sommerzen
2025-05-03T07:46:13Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T07:40:25Z
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** sommerzen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF
mradermacher
2025-05-03T07:44:49Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564", "base_model:quantized:fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564", "endpoints_compatible", "region:us", "imatrix", "feature-extraction" ]
null
2025-05-03T07:41:41Z
---
base_model: fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564

<!-- provided-files -->

Static quants are available at https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
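Since this repo is tagged feature-extraction (a GGUF conversion of an embedding model), a minimal sketch of extracting embeddings with llama-cpp-python might look like the following; the library choice and local filename are assumptions, not from the card:

```python
# pip install llama-cpp-python  (assumed tooling; the card itself only points to TheBloke's READMEs)
from llama_cpp import Llama

llm = Llama(
    model_path="medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.i1-Q4_K_M.gguf",
    embedding=True,  # run the model in embedding (feature-extraction) mode
)

result = llm.create_embedding("Patient presents with acute chest pain.")
vector = result["data"][0]["embedding"]
print(len(vector))  # dimensionality of the returned embedding
```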
mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF
mradermacher
2025-05-03T07:43:39Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564", "base_model:quantized:fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-05-03T07:41:17Z
---
base_model: fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/fine-tuned/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564

<!-- provided-files -->

Weighted/imatrix quants are available at https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF/resolve/main/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
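To fetch one of the files listed above programmatically, here is a small sketch using huggingface_hub; this is an assumed convenience, as the card itself only links TheBloke's READMEs for usage:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M static quant listed in the table above.
path = hf_hub_download(
    repo_id="mradermacher/medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564-GGUF",
    filename="medical-100-64-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo_9062874564.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```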
fasasimounpedsf/SDVDFVB
fasasimounpedsf
2025-05-03T07:43:36Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-03T07:43:36Z
---
license: bigscience-openrail-m
---
mradermacher/II-Medical-7B-Preview-i1-GGUF
mradermacher
2025-05-03T07:39:58Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Intelligent-Internet/II-Medical-7B-Preview", "base_model:quantized:Intelligent-Internet/II-Medical-7B-Preview", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T00:05:03Z
---
base_model: Intelligent-Internet/II-Medical-7B-Preview
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Intelligent-Internet/II-Medical-7B-Preview

<!-- provided-files -->

Static quants are available at https://huggingface.co/mradermacher/II-Medical-7B-Preview-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/II-Medical-7B-Preview-i1-GGUF/resolve/main/II-Medical-7B-Preview.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
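A minimal sketch of running one of these conversational quants locally with llama-cpp-python; the tool choice, context size, and prompt are illustrative assumptions, not from the card:

```python
from llama_cpp import Llama

# Load the Q4_K_M quant recommended in the table above (downloaded beforehand).
llm = Llama(model_path="II-Medical-7B-Preview.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List three common causes of chest pain."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```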
ponytail/Face-LLaVA_Qwen2.5-3B
ponytail
2025-05-03T07:38:08Z
0
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "AIGC", "LLaVA", "visual-question-answering", "dataset:OpenFace-CQUPT/FaceCaption-15M", "arxiv:2411.03034", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "endpoints_compatible", "region:us" ]
visual-question-answering
2025-05-03T05:02:30Z
---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- AIGC
- LLaVA
datasets:
- OpenFace-CQUPT/FaceCaption-15M
metrics:
- accuracy
pipeline_tag: visual-question-answering
---

# Human-LLaVA-8B

## DEMO

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/TpN2t19Poe5YbHHP8uN7_.mp4"></video>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/1xS27bvECvGTKntvOa1SQ.png)

### Introduction

Human-related vision and language tasks are widely applied across various social scenarios. Recent studies demonstrate that large vision-language models can enhance performance on a range of downstream visual-language understanding tasks. However, models built for the general domain often perform poorly in specialized fields. In this study, we train a domain-specific large language-vision model, Human-LLaVA, which aims to be a unified multimodal language-vision model for human-related tasks. Specifically, (1) we first construct **a large-scale, high-quality human-related image-text (caption) dataset** extracted from the Internet for domain-specific alignment in the first stage (coming soon); (2) we also construct **multi-granularity captions for human-related images** (coming soon), covering the human face, the human body, and the whole image, and use them to fine-tune a large language model. Finally, we evaluate our model on a series of downstream tasks. Our **Human-LLaVA** achieved the best overall performance among multimodal models of similar scale; in particular, it exhibits the best performance on a series of human-related tasks, significantly surpassing similar models and ChatGPT-4o. We believe that the Human-LLaVA model and the datasets presented in this work can promote research in related fields.

## Result

Human-LLaVA performs well in both general and specialized domains.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/X-712oVUBPXbfLcAz83fb.png)

## News and Update 🔥🔥🔥

* Oct. 23, 2024. **🤗[HumanCaption-HQ-311K](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-HQ-311K) is released!👏👏👏**
* Sep. 12, 2024. **🤗[HumanCaption-10M](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-10M) is released!👏👏👏**
* Sep. 8, 2024. **🤗[HumanVLM](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA) is released!👏👏👏**

## 🤗 Transformers

To use Human-LLaVA for inference, run the few lines of code demonstrated below. Make sure that you are using the latest code.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForPreTraining

model_id = "OpenFace-CQUPT/Human_LLaVA"
cuda = 0  # CUDA device index
model = AutoModelForPreTraining.from_pretrained(model_id, torch_dtype=torch.float16).to(cuda)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Build a single-turn VQA prompt in the expected USER/ASSISTANT format.
text = "Please describe this picture"
prompt = "USER: <image>\n" + text + "\nASSISTANT:"

image_file = "./test1.jpg"
raw_image = Image.open(image_file)
# To load from a URL instead:
# import requests
# raw_image = Image.open(requests.get(image_url, stream=True).raw)

inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to(cuda, torch.float16)

output = model.generate(**inputs, max_new_tokens=400, do_sample=False)
predict = processor.decode(output[0], skip_special_tokens=True)
print(predict)
```

Our training code has been released publicly on GitHub: [ddw2AIGROUP2CQUPT/Human-LLaVA-8B (github.com)](https://github.com/ddw2AIGROUP2CQUPT/Human-LLaVA-8B)

## Get the Dataset

#### Dataset Example

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/-gTV7ym_gmNmJqNRDzlCx.png)

#### Domain Alignment Stage

[HumanCaption-10M](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-10M) (self-constructed): released!

#### Instruction Tuning Stage

**All public datasets have been filtered, and we will consider publishing all processed text in the future.**

[HumanCaption-HQ](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-HQ-311K) (self-constructed): released!

[FaceCaptionA](https://huggingface.co/datasets/OpenFace-CQUPT/FaceCaption-15M) (self-constructed): released!

CelebA: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html

ShareGPT4V: https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md

LLaVA-Instruct_zh: https://huggingface.co/datasets/openbmb/llava_zh

verified_ref3rec: https://huggingface.co/datasets/lucasjin/refcoco/blob/main/ref3rec.json

verified_ref3reg: https://huggingface.co/datasets/lucasjin/refcoco/blob/main/ref3rec.json

verified_shikra: https://github.com/shikras/shikra

## Citation

```
@misc{dai2024humanvlmfoundationhumanscenevisionlanguage,
      title={HumanVLM: Foundation for Human-Scene Vision-Language Model},
      author={Dawei Dai and Xu Long and Li Yutang and Zhang Yuanhui and Shuyin Xia},
      year={2024},
      eprint={2411.03034},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2411.03034},
}
```

## Contact

Email: [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected])
bartalex31/mlmodel
bartalex31
2025-05-03T07:33:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T07:33:19Z
---
license: apache-2.0
---
Banki42/model
Banki42
2025-05-03T07:20:37Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T07:19:28Z
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Banki42
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Hachipo/Qwen2.5-7B-MIFT-en_10000_2
Hachipo
2025-05-03T07:16:46Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T07:12:49Z
---
library_name: transformers
tags:
- trl
- sft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
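The "How to Get Started" section above is left as [More Information Needed]. Based only on the repo tags (transformers, qwen2, text-generation, conversational), a generic sketch might look like the following, with every specific assumed rather than documented by the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage inferred from the repo tags; not documented by the model card.
model_id = "Hachipo/Qwen2.5-7B-MIFT-en_10000_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```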
gtkunit/Qwen3-235B-A22B-2.0bpw-h6-exl2
gtkunit
2025-05-03T07:14:44Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T07:14:44Z
---
license: apache-2.0
---
jahyungu/Llama-3.2-1B-Instruct_MetaMathQA-40K_cluster9
jahyungu
2025-05-03T07:14:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T05:31:32Z
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct_MetaMathQA-40K_cluster9
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Llama-3.2-1B-Instruct_MetaMathQA-40K_cluster9

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
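As a sketch of how the hyperparameters above map onto a transformers TrainingArguments configuration (field values come from the card; output_dir and everything else is assumed):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-3.2-1b-metamathqa-cluster9",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=16,  # effective batch size: 1 * 16 = 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=3,
)
```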
isbistloui/math-llama-aiml428-a2
isbistloui
2025-05-03T07:06:43Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T07:06:28Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF
mradermacher
2025-05-03T07:06:22Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Mumamonster/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical", "base_model:quantized:Mumamonster/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T06:34:58Z
---
base_model: Mumamonster/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Mumamonster/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical

<!-- provided-files -->

Static quants are available at https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical-i1-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-Distill_all_putonghua_medical.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
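The table note above says IQ-quants are often preferable to similar-sized non-IQ quants. A toy sketch of picking a file under a size budget with that preference; the selection heuristic is illustrative, not part of the card:

```python
# Toy selection heuristic: largest quant under the budget, preferring IQ types on ties.
quants = [
    ("i1-IQ3_M", 0.9, True), ("i1-Q3_K_M", 0.9, False),
    ("i1-IQ4_XS", 1.0, True), ("i1-Q4_K_S", 1.0, False),
    ("i1-Q4_K_M", 1.1, False), ("i1-Q6_K", 1.4, False),
]  # (name, size_gb, is_iq), values taken from the table above

def pick_quant(budget_gb: float):
    candidates = [q for q in quants if q[1] <= budget_gb]
    # Sort by size first, then prefer IQ quants among equal sizes.
    return max(candidates, key=lambda q: (q[1], q[2]), default=None)

print(pick_quant(1.0))  # -> ('i1-IQ4_XS', 1.0, True)
```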
bartowski/kalomaze_Qwen3-16B-A3B-GGUF
bartowski
2025-05-03T06:48:46Z
0
2
null
[ "gguf", "text-generation", "base_model:kalomaze/Qwen3-16B-A3B", "base_model:quantized:kalomaze/Qwen3-16B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-03T05:14:31Z
--- quantized_by: bartowski pipeline_tag: text-generation license: apache-2.0 base_model_relation: quantized base_model: kalomaze/Qwen3-16B-A3B --- ## Llamacpp imatrix Quantizations of Qwen3-16B-A3B by kalomaze Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5255">b5255</a> for quantization. Original model: https://huggingface.co/kalomaze/Qwen3-16B-A3B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [Qwen3-16B-A3B-bf16.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-bf16.gguf) | bf16 | 32.08GB | false | Full BF16 weights. | | [Qwen3-16B-A3B-Q8_0.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q8_0.gguf) | Q8_0 | 17.06GB | false | Extremely high quality, generally unneeded but max available quant. | | [Qwen3-16B-A3B-Q6_K_L.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q6_K_L.gguf) | Q6_K_L | 13.34GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [Qwen3-16B-A3B-Q6_K.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q6_K.gguf) | Q6_K | 13.19GB | false | Very high quality, near perfect, *recommended*. | | [Qwen3-16B-A3B-Q5_K_L.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q5_K_L.gguf) | Q5_K_L | 11.62GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [Qwen3-16B-A3B-Q5_K_M.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q5_K_M.gguf) | Q5_K_M | 11.43GB | false | High quality, *recommended*. | | [Qwen3-16B-A3B-Q5_K_S.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q5_K_S.gguf) | Q5_K_S | 11.11GB | false | High quality, *recommended*. | | [Qwen3-16B-A3B-Q4_1.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q4_1.gguf) | Q4_1 | 10.13GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. | | [Qwen3-16B-A3B-Q4_K_L.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q4_K_L.gguf) | Q4_K_L | 10.06GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [Qwen3-16B-A3B-Q4_K_M.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q4_K_M.gguf) | Q4_K_M | 9.83GB | false | Good quality, default size for most use cases, *recommended*. | | [Qwen3-16B-A3B-Q4_K_S.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q4_K_S.gguf) | Q4_K_S | 9.50GB | false | Slightly lower quality with more space savings, *recommended*. 
| | [Qwen3-16B-A3B-Q4_0.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q4_0.gguf) | Q4_0 | 9.30GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. | | [Qwen3-16B-A3B-IQ4_NL.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ4_NL.gguf) | IQ4_NL | 9.21GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. | | [Qwen3-16B-A3B-IQ4_XS.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ4_XS.gguf) | IQ4_XS | 8.73GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Qwen3-16B-A3B-Q3_K_XL.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q3_K_XL.gguf) | Q3_K_XL | 7.98GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [Qwen3-16B-A3B-Q3_K_L.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q3_K_L.gguf) | Q3_K_L | 7.71GB | false | Lower quality but usable, good for low RAM availability. | | [Qwen3-16B-A3B-Q3_K_M.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q3_K_M.gguf) | Q3_K_M | 7.50GB | false | Low quality. | | [Qwen3-16B-A3B-IQ3_M.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ3_M.gguf) | IQ3_M | 7.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Qwen3-16B-A3B-Q3_K_S.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q3_K_S.gguf) | Q3_K_S | 7.17GB | false | Low quality, not recommended. | | [Qwen3-16B-A3B-IQ3_XS.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ3_XS.gguf) | IQ3_XS | 6.82GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Qwen3-16B-A3B-IQ3_XXS.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ3_XXS.gguf) | IQ3_XXS | 6.53GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Qwen3-16B-A3B-Q2_K_L.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q2_K_L.gguf) | Q2_K_L | 6.19GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [Qwen3-16B-A3B-Q2_K.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-Q2_K.gguf) | Q2_K | 5.88GB | false | Very low quality but surprisingly usable. | | [Qwen3-16B-A3B-IQ2_M.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ2_M.gguf) | IQ2_M | 5.62GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [Qwen3-16B-A3B-IQ2_S.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ2_S.gguf) | IQ2_S | 5.01GB | false | Low quality, uses SOTA techniques to be usable. | | [Qwen3-16B-A3B-IQ2_XS.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ2_XS.gguf) | IQ2_XS | 4.93GB | false | Low quality, uses SOTA techniques to be usable. 
| | [Qwen3-16B-A3B-IQ2_XXS.gguf](https://huggingface.co/bartowski/kalomaze_Qwen3-16B-A3B-GGUF/blob/main/kalomaze_Qwen3-16B-A3B-IQ2_XXS.gguf) | IQ2_XXS | 4.43GB | false | Very low quality, uses SOTA techniques to be usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.

## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/kalomaze_Qwen3-16B-A3B-GGUF --include "kalomaze_Qwen3-16B-A3B-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/kalomaze_Qwen3-16B-A3B-GGUF --include "kalomaze_Qwen3-16B-A3B-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (kalomaze_Qwen3-16B-A3B-Q8_0) or download them all in place (./)

</details>

## ARM/AVX information

Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.

Now, however, there is something called "online repacking" for weights. Details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.

As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.

Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541) which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.

<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>

I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>

| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |

Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation

</details>

</details>

## Which file should I choose?

<details>
<summary>Click here for details</summary>

A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

</details>

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
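
## Example: picking a quant programmatically

As a rough illustration of the sizing advice above, here is a minimal, hypothetical Python sketch that picks the largest quant fitting your VRAM budget. The file sizes are copied from the table above; the `pick_quant` helper and the 1.5GB headroom default are assumptions based on the "1-2GB smaller than your VRAM" guideline, not an official tool.

```python
# Hypothetical helper: pick the largest quant that leaves some VRAM headroom.
# Sizes (in GB) are copied from the quant table above; headroom is an assumption.
QUANT_SIZES_GB = {
    "Q8_0": 17.06, "Q6_K": 13.19, "Q5_K_M": 11.43, "Q4_K_M": 9.83,
    "IQ4_XS": 8.73, "Q3_K_M": 7.50, "IQ3_XXS": 6.53, "Q2_K": 5.88,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file fits in vram_gb minus headroom."""
    budget = vram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    if not fitting:
        raise ValueError("No quant fits on this GPU; consider CPU/RAM offload.")
    return max(fitting, key=fitting.get)

print(pick_quant(12.0))  # a 12GB GPU leaves a 10.5GB budget -> Q4_K_M (9.83GB)
```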
prithivMLmods/Omni-Reasoner-2B
prithivMLmods
2025-05-03T06:47:22Z
9
4
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "text-generation-inference", "Omni", "Math", "Reasoner", "Qwen-Base", "conversational", "en", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-01-16T22:46:47Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen2-VL-2B-Instruct pipeline_tag: image-text-to-text library_name: transformers tags: - text-generation-inference - Omni - Math - Reasoner - Qwen-Base --- # **Omni-Reasoner-2B [VL/ Doc OCR]** ![ADS.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/GJGP5Qvo7ew-ZQbwb3Hp9.png) *Omni-Reasoner-2B* is based on Qwen2VL and is designed for mathematical and content-based explanations. It excels in providing detailed reasoning about content and solving math problems with proper content formatting. This model integrates a conversational approach with visual and textual understanding to handle multi-modal tasks effectively. # **Use it with Transformers** *Before using, ensure that the required libraries are successfully installed in the environment.* !pip install gradio spaces transformers accelerate numpy requests torch torchvision qwen-vl-utils av ipython reportlab fpdf python-docx pillow huggingface_hub *ChemQwen With Inference Documentation, **Before using, make sure that the `hf_token` is provided in the login field in the code below.*** # **Sample Inference with Doc** ![omnip.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/53LkSMzAkIl1Yxc2yfLwb.png) 📒*Demo:* https://huggingface.co/prithivMLmods/Omni-Reasoner-2B/blob/main/Omni-R/omni-r.ipynb ```python # Authenticate with Hugging Face from huggingface_hub import login # Log in to Hugging Face using the provided token hf_token = '----xxxxx----' login(hf_token) # Demo import gradio as gr import spaces from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, TextIteratorStreamer from qwen_vl_utils import process_vision_info import torch from PIL import Image import os import uuid import io from threading import Thread from reportlab.lib.pagesizes import A4 from reportlab.lib.styles import getSampleStyleSheet from reportlab.lib import colors from reportlab.platypus import SimpleDocTemplate, Image as RLImage, Paragraph, Spacer from reportlab.pdfbase import pdfmetrics from reportlab.pdfbase.ttfonts import TTFont import docx from docx.enum.text import WD_ALIGN_PARAGRAPH # Define model options MODEL_OPTIONS = { "Omni-Reasoner": "prithivMLmods/Omni-Reasoner-2B", } # Preload models and processors into CUDA models = {} processors = {} for name, model_id in MODEL_OPTIONS.items(): print(f"Loading {name}...") models[name] = Qwen2VLForConditionalGeneration.from_pretrained( model_id, trust_remote_code=True, torch_dtype=torch.float16 ).to("cuda").eval() processors[name] = AutoProcessor.from_pretrained(model_id, trust_remote_code=True) image_extensions = Image.registered_extensions() def identify_and_save_blob(blob_path): """Identifies if the blob is an image and saves it.""" try: with open(blob_path, 'rb') as file: blob_content = file.read() try: Image.open(io.BytesIO(blob_content)).verify() # Check if it's a valid image extension = ".png" # Default to PNG for saving media_type = "image" except (IOError, SyntaxError): raise ValueError("Unsupported media type. 
Please upload a valid image.") filename = f"temp_{uuid.uuid4()}_media{extension}" with open(filename, "wb") as f: f.write(blob_content) return filename, media_type except FileNotFoundError: raise ValueError(f"The file {blob_path} was not found.") except Exception as e: raise ValueError(f"An error occurred while processing the file: {e}") @spaces.GPU def qwen_inference(model_name, media_input, text_input=None): """Handles inference for the selected model.""" model = models[model_name] processor = processors[model_name] if isinstance(media_input, str): media_path = media_input if media_path.endswith(tuple([i for i in image_extensions.keys()])): media_type = "image" else: try: media_path, media_type = identify_and_save_blob(media_input) except Exception as e: raise ValueError("Unsupported media type. Please upload a valid image.") messages = [ { "role": "user", "content": [ { "type": media_type, media_type: media_path }, {"type": "text", "text": text_input}, ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, _ = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, padding=True, return_tensors="pt", ).to("cuda") streamer = TextIteratorStreamer( processor.tokenizer, skip_prompt=True, skip_special_tokens=True ) generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=1024) thread = Thread(target=model.generate, kwargs=generation_kwargs) thread.start() buffer = "" for new_text in streamer: buffer += new_text # Remove <|im_end|> or similar tokens from the output buffer = buffer.replace("<|im_end|>", "") yield buffer def format_plain_text(output_text): """Formats the output text as plain text without LaTeX delimiters.""" # Remove LaTeX delimiters and convert to plain text plain_text = output_text.replace("\\(", "").replace("\\)", "").replace("\\[", "").replace("\\]", "") return plain_text def generate_document(media_path, output_text, file_format, font_size, line_spacing, alignment, image_size): """Generates a document with the input image and plain text output.""" plain_text = format_plain_text(output_text) if file_format == "pdf": return generate_pdf(media_path, plain_text, font_size, line_spacing, alignment, image_size) elif file_format == "docx": return generate_docx(media_path, plain_text, font_size, line_spacing, alignment, image_size) def generate_pdf(media_path, plain_text, font_size, line_spacing, alignment, image_size): """Generates a PDF document.""" filename = f"output_{uuid.uuid4()}.pdf" doc = SimpleDocTemplate( filename, pagesize=A4, rightMargin=inch, leftMargin=inch, topMargin=inch, bottomMargin=inch ) styles = getSampleStyleSheet() styles["Normal"].fontSize = int(font_size) styles["Normal"].leading = int(font_size) * line_spacing styles["Normal"].alignment = { "Left": 0, "Center": 1, "Right": 2, "Justified": 4 }[alignment] story = [] # Add image with size adjustment image_sizes = { "Small": (200, 200), "Medium": (400, 400), "Large": (600, 600) } img = RLImage(media_path, width=image_sizes[image_size][0], height=image_sizes[image_size][1]) story.append(img) story.append(Spacer(1, 12)) # Add plain text output text = Paragraph(plain_text, styles["Normal"]) story.append(text) doc.build(story) return filename def generate_docx(media_path, plain_text, font_size, line_spacing, alignment, image_size): """Generates a DOCX document.""" filename = f"output_{uuid.uuid4()}.docx" doc = docx.Document() # Add image with size adjustment image_sizes = { "Small": docx.shared.Inches(2), "Medium": 
docx.shared.Inches(4), "Large": docx.shared.Inches(6) } doc.add_picture(media_path, width=image_sizes[image_size]) doc.add_paragraph() # Add plain text output paragraph = doc.add_paragraph() paragraph.paragraph_format.line_spacing = line_spacing paragraph.paragraph_format.alignment = { "Left": WD_ALIGN_PARAGRAPH.LEFT, "Center": WD_ALIGN_PARAGRAPH.CENTER, "Right": WD_ALIGN_PARAGRAPH.RIGHT, "Justified": WD_ALIGN_PARAGRAPH.JUSTIFY }[alignment] run = paragraph.add_run(plain_text) run.font.size = docx.shared.Pt(int(font_size)) doc.save(filename) return filename # CSS for output styling css = """ #output { height: 500px; overflow: auto; border: 1px solid #ccc; } .submit-btn { background-color: #cf3434 !important; color: white !important; } .submit-btn:hover { background-color: #ff2323 !important; } .download-btn { background-color: #35a6d6 !important; color: white !important; } .download-btn:hover { background-color: #22bcff !important; } """ # Gradio app setup with gr.Blocks(css=css) as demo: gr.Markdown("# ChemQwen Chemical Identifier") with gr.Tab(label="Image Input"): with gr.Row(): with gr.Column(): model_choice = gr.Dropdown( label="Model Selection", choices=list(MODEL_OPTIONS.keys()), value="Omni-Reasoner" ) input_media = gr.File( label="Upload Image", type="filepath" ) text_input = gr.Textbox(label="Question", placeholder="Ask a question about the image...") submit_btn = gr.Button(value="Submit", elem_classes="submit-btn") with gr.Column(): output_text = gr.Textbox(label="Output Text", lines=10) plain_text_output = gr.Textbox(label="Standardized Plain Text", lines=10) submit_btn.click( qwen_inference, [model_choice, input_media, text_input], [output_text] ).then( lambda output_text: format_plain_text(output_text), [output_text], [plain_text_output] ) # Add examples directly usable by clicking with gr.Row(): with gr.Column(): line_spacing = gr.Dropdown( choices=[0.5, 1.0, 1.15, 1.5, 2.0, 2.5, 3.0], value=1.5, label="Line Spacing" ) font_size = gr.Dropdown( choices=["8", "10", "12", "14", "16", "18", "20", "22", "24"], value="18", label="Font Size" ) alignment = gr.Dropdown( choices=["Left", "Center", "Right", "Justified"], value="Justified", label="Text Alignment" ) image_size = gr.Dropdown( choices=["Small", "Medium", "Large"], value="Small", label="Image Size" ) file_format = gr.Radio(["pdf", "docx"], label="File Format", value="pdf") get_document_btn = gr.Button(value="Get Document", elem_classes="download-btn") get_document_btn.click( generate_document, [input_media, output_text, file_format, font_size, line_spacing, alignment, image_size], gr.File(label="Download Document") ) demo.launch(debug=True) ``` # **Key Enhancements** 1. **Advanced Reasoning Capabilities**: - Enhanced ability to perform long-form reasoning for complex mathematical and content-based queries. - Supports detailed step-by-step explanations for problem-solving and content formatting. 2. **Multi-Modal Integration**: - Combines visual and textual understanding to interpret and analyze diverse input formats (images, text, and mathematical expressions). 3. **Conversational Workflow**: - Offers a natural conversational interface for interactive problem-solving and explanations. 4. **Content Formatting**: - Improves content presentation with structured formatting for better readability and understanding. # **Intended Use** 1. **Educational Assistance**: - Ideal for students and educators for solving mathematical problems, creating structured explanations, and formatting educational content. 2. 
**Research Support**: - Assists researchers in generating in-depth explanations and interpreting complex visual and textual data. 3. **Content Creation**: - Enhances the generation of well-formatted documents, reports, and presentations. 4. **General Purpose Assistance**: - Useful for applications requiring long-form reasoning and conversational AI in domains like tutoring, customer support, and technical writing. # **Limitations** 1. **Domain-Specific Expertise**: - May struggle with niche or highly specialized topics outside its training domain. 2. **Error in Long-Chain Reasoning**: - In rare cases, it might generate incorrect or inconsistent solutions for highly complex problems. 3. **Visual Data Limitations**: - Performance may depend on the quality and clarity of visual inputs (e.g., low-resolution images may reduce accuracy). 4. **Formatting Constraints**: - While effective, complex or heavily customized formatting tasks may require manual adjustments. 5. **Dependence on Context**: - The model relies on well-structured input to produce accurate and coherent outputs; ambiguous or incomplete prompts may lead to suboptimal results.
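
# **Minimal Direct Inference (Sketch)**

For readers who want to skip the Gradio demo above, the following is a minimal, non-authoritative sketch of direct inference, following the standard Qwen2-VL `transformers` workflow that the demo itself uses. The image path `document.png` is a placeholder for your own file.

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Omni-Reasoner-2B", torch_dtype=torch.float16
).to("cuda").eval()
processor = AutoProcessor.from_pretrained("prithivMLmods/Omni-Reasoner-2B")

# "document.png" is a placeholder path; substitute your own image.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "document.png"},
        {"type": "text", "text": "Solve the problem shown in the image step by step."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, padding=True, return_tensors="pt").to("cuda")

output_ids = model.generate(**inputs, max_new_tokens=512)
# Trim the prompt tokens before decoding, keeping only the generated answer.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```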
OpenMOSE/PRWKV-7-Qwen3-Preview-v0.1
OpenMOSE
2025-05-03T06:44:16Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T19:11:42Z
--- license: apache-2.0 --- # **Model Card: PRWKV-7-Qwen3-14B-Preview-v0.1** ### **Overview** - **Model Name:** PRWKV-7-Qwen3-14B-Preview-v0.1 - **Base Model:** Qwen3 14B (Instruct) - **Architecture:** RWKV Cxa076r (RWKV x070 Based) + SwiGLU - **Parameter Count:** 14 Billion - **Context Length:** 3072 - **Training Tokens:** - Stage 1: 100 Million Tokens - Stage 2: 200 Million Tokens This model is part of an experimental effort to *replace Transformer-style attention with a fully recurrent RWKV-based architecture*. It uses a customized version of the RWKV TimeMix block (`Cxa076r`) with SwiGLU activation, applied to a 14B-scale model derived from Qwen3. --- ### **Motivation** The goal of this project is to explore whether an RNN-style model such as RWKV can faithfully mimic the output and reasoning behavior of large Transformer-based LLMs like Qwen3, while retaining the benefits of linear compute cost and persistent memory. Replacing attention with TimeMix was not a trivial task. Qwen3 is heavily optimized for attention-based flow, including grouped-query attention (GQA) and Rotary Positional Embeddings (RoPE). To bridge the architecture gap, we introduced novel gating structures, careful initialization alignment, and staged distillation involving both token-level and hidden-state mimicry. --- ### **Challenges Faced** - **Stability in Early Training:** Unlike Transformer models, RWKV's state dynamics require careful gating and normalization. Without it, token dropout or state explosion frequently occurred during warm-up. - **Cross-Architecture Distillation:** Aligning a recurrent architecture with a feed-forward Transformer introduced step-wise divergence, especially in conversational jumps. Custom loss functions were employed to match hidden trajectories and long-term behavior, not just per-token outputs. - **Context Sensitivity:** Increasing context length beyond 2048 revealed stability cliffs. Careful adjustment of temporal decay, positional mixing, and memory routing was necessary to reach 3072 tokens reliably. --- ### **Current Limitations** This is a *preview* version. The model is capable of coherent generation, especially in long-form settings, but may still show deviations in precision-demanding tasks or rare contexts. Prompt injection robustness and RLHF alignment are future work. --- ### **License & Usage** This model is intended for **research and experimentation only**. Please consult the licensing terms of Qwen3 and RWKV if you intend to use this model commercially or fine-tune it. --- ### **Poem – The Cost of Curiosity** > Countless times we failed— > A ghost in the gradients, > A silence in the state. > > Attention was easy. > But ease never leads to breakthrough. > > We drank too much coffee. > Slept too little. > > And somewhere between the hallucinations, > The loss spikes, > And the whispered curses at 3am— > > A new mind was born. > PRWKV-7 lives. --- 2025 OpenMOSE https://x.com/_m0se_
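
### **Illustrative Distillation Loss (Unofficial Sketch)**

The card above does not publish training code. Purely to illustrate the staged distillation it describes, matching hidden trajectories as well as per-token outputs, here is a hypothetical sketch of a combined loss. The `alpha`, `beta`, and `tau` values are invented for the example and are not the values used to train PRWKV-7.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits,
                 student_hidden, teacher_hidden,
                 alpha=1.0, beta=0.5, tau=2.0):
    """Token-level KL mimicry plus hidden-state trajectory matching.

    Logits are (batch, seq, vocab); hidden states are (batch, seq, dim).
    All hyperparameters here are illustrative, not the card's values.
    """
    # Soft-target KL between teacher and student next-token distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    # MSE between hidden trajectories encourages state-level alignment.
    hidden_mse = F.mse_loss(student_hidden, teacher_hidden)
    return alpha * kl + beta * hidden_mse
```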
fats-fme/e7022a06-8423-4490-9934-13f3adf6b973
fats-fme
2025-05-03T06:43:18Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-05-03T06:15:57Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: e7022a06-8423-4490-9934-13f3adf6b973 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 576d05a318e5dd05_train_data.json ds_type: json format: custom path: /workspace/input_data/576d05a318e5dd05_train_data.json type: field_instruction: problem field_output: reasoning_solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: fats-fme/e7022a06-8423-4490-9934-13f3adf6b973 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 130GB max_steps: 50 micro_batch_size: 1 mlflow_experiment_name: /tmp/576d05a318e5dd05_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c571c6f1-2bf0-403b-8085-e6a964a4f9c8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c571c6f1-2bf0-403b-8085-e6a964a4f9c8 warmup_steps: 200 weight_decay: 0.01 xformers_attention: null ``` </details><br> # e7022a06-8423-4490-9934-13f3adf6b973 This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.9034 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
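
## Quick usage sketch (unofficial)

Since this repository contains a LoRA adapter rather than full model weights, a plausible way to use it (a sketch, not an official quick start) is to load the base model and attach the adapter with PEFT:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "fats-fme/e7022a06-8423-4490-9934-13f3adf6b973")

messages = [{"role": "user", "content": "Walk through the reasoning for 17 * 24."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```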
aleegis/7b58edbb-d88d-4565-b281-4eff324bc672
aleegis
2025-05-03T06:38:39Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "region:us" ]
null
2025-05-03T05:17:43Z
--- library_name: peft license: other base_model: deepseek-ai/deepseek-coder-6.7b-instruct tags: - axolotl - generated_from_trainer model-index: - name: 7b58edbb-d88d-4565-b281-4eff324bc672 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: deepseek-ai/deepseek-coder-6.7b-instruct bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 79ae7482d8ea96ee_train_data.json ds_type: json format: custom path: /workspace/input_data/79ae7482d8ea96ee_train_data.json type: field_instruction: text field_output: completion_a format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/7b58edbb-d88d-4565-b281-4eff324bc672 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/79ae7482d8ea96ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: f234d8d9-7843-44ae-80fb-4dccf66214cc wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f234d8d9-7843-44ae-80fb-4dccf66214cc warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # 7b58edbb-d88d-4565-b281-4eff324bc672 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
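
## Merging the adapter (sketch)

As with any PEFT LoRA adapter, the weights here can plausibly be merged back into the base model for standalone deployment. A minimal sketch, assuming the adapter loads cleanly against the base checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-instruct", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "aleegis/7b58edbb-d88d-4565-b281-4eff324bc672")

# Fold the LoRA deltas into the base weights, then save a plain checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("deepseek-coder-6.7b-merged")
```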
Saptarshi1234/starcoder2-3b-finetuned
Saptarshi1234
2025-05-03T06:36:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T06:35:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fivedoctors/ppo-SnowballTarget
fivedoctors
2025-05-03T06:31:47Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-05-03T06:09:23Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: fivedoctors/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
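
### Inspect the exported policy (sketch)

If you would rather inspect the exported policy locally than use the browser viewer, here is a hypothetical sketch with `onnxruntime`. The `SnowballTarget.onnx` filename is a placeholder, and ML-Agents names its input/output tensors per export, so list them from the session before running inference.

```python
import onnxruntime as ort

# "SnowballTarget.onnx" is a placeholder; use the .onnx file from this repo.
session = ort.InferenceSession("SnowballTarget.onnx")

# Tensor names vary between ML-Agents exports, so enumerate them first.
for tensor in session.get_inputs():
    print("input:", tensor.name, tensor.shape)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape)
```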
mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF
mradermacher
2025-05-03T06:29:26Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:sequelbox/Qwen3-8B-Esper3-PREVIEW", "base_model:quantized:sequelbox/Qwen3-8B-Esper3-PREVIEW", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T21:37:04Z
--- base_model: sequelbox/Qwen3-8B-Esper3-PREVIEW language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/sequelbox/Qwen3-8B-Esper3-PREVIEW <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q2_K.gguf) | Q2_K | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q3_K_S.gguf) | Q3_K_S | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q3_K_L.gguf) | Q3_K_L | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.IQ4_XS.gguf) | IQ4_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q5_K_M.gguf) | Q5_K_M | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Esper3-PREVIEW-GGUF/resolve/main/Qwen3-8B-Esper3-PREVIEW.f16.gguf) | f16 | 16.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
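
## Example (llama-cpp-python sketch)

One concrete way to run these files, sketched with `llama-cpp-python`; the filename below is the Q4_K_M quant listed in the table above, and the context size is an arbitrary choice for the example.

```python
from llama_cpp import Llama

# Point at a downloaded quant from the table above, e.g. the Q4_K_M file.
llm = Llama(model_path="Qwen3-8B-Esper3-PREVIEW.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```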
Alphatao/4e0d509f-42a3-4378-85fc-ef2fb7c82f27
Alphatao
2025-05-03T06:18:01Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gemma", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/codegemma-7b-it", "base_model:finetune:unsloth/codegemma-7b-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T05:25:34Z
--- base_model: unsloth/codegemma-7b-it library_name: transformers model_name: 4e0d509f-42a3-4378-85fc-ef2fb7c82f27 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 4e0d509f-42a3-4378-85fc-ef2fb7c82f27 This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alphatao/4e0d509f-42a3-4378-85fc-ef2fb7c82f27", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/pcqut8h9) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
esuna/chelsea-minimal
esuna
2025-05-03T06:10:01Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T06:10:00Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # chelsea-minimal A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words No trigger words defined. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
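
## Use it with 🧨 diffusers (sketch)

A plausible diffusers sketch for loading this LoRA. The `weight_name` below is an assumption (Fluxgym typically writes a single `.safetensors` file), so verify it against the repository's file list before use.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Weight filename is assumed; check the repo file listing.
pipeline.load_lora_weights("esuna/chelsea-minimal", weight_name="chelsea-minimal.safetensors")

# No trigger word is defined for this LoRA, so prompt normally.
image = pipeline("a minimal portrait photograph, soft window light").images[0]
image.save("chelsea-minimal-sample.png")
```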
TOMFORD79/Fly35
TOMFORD79
2025-05-03T06:09:35Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-03T05:46:54Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
alin13/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_moist_ocelot
alin13
2025-05-03T06:09:02Z
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am woolly moist ocelot", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T10:18:03Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_moist_ocelot tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am woolly moist ocelot - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_moist_ocelot This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alin13/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_moist_ocelot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
shibajustfor/3345c74b-ab00-4a6f-ad04-4478968f921e
shibajustfor
2025-05-03T06:06:08Z
0
0
transformers
[ "transformers", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-05-03T06:05:23Z
---
library_name: transformers
model_name: shibajustfor/3345c74b-ab00-4a6f-ad04-4478968f921e
tags:
- generated_from_trainer
licence: license
---

# Model Card for shibajustfor/3345c74b-ab00-4a6f-ad04-4478968f921e

This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shibajustfor/3345c74b-ab00-4a6f-ad04-4478968f921e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

### Framework versions

- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
sanabar/roberta-goemo-journals
sanabar
2025-05-03T06:05:41Z
65
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:SamLowe/roberta-base-go_emotions", "base_model:finetune:SamLowe/roberta-base-go_emotions", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-17T00:19:52Z
--- library_name: transformers license: mit base_model: SamLowe/roberta-base-go_emotions tags: - generated_from_trainer metrics: - precision - recall model-index: - name: roberta-goemo-journals results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-goemo-journals ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
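
## Quick usage sketch (unofficial)

While the card above is still a stub, the base model is a multi-label GoEmotions classifier, so a reasonable usage sketch (unverified against this fine-tune's exact label set) is:

```python
from transformers import pipeline

# top_k=None returns scores for every emotion label rather than just the best.
classifier = pipeline(
    "text-classification", model="sanabar/roberta-goemo-journals", top_k=None
)

scores = classifier("Today I finally finished the project and I feel relieved.")[0]
for item in sorted(scores, key=lambda s: s["score"], reverse=True)[:5]:
    print(f"{item['label']}: {item['score']:.3f}")
```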
TadN427/NewTad
TadN427
2025-05-03T06:04:06Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T06:04:05Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK_tad --- # Newtad <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK_tad` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK_tad", "lora_weights": "https://huggingface.co/TadN427/NewTad/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('TadN427/NewTad', weight_name='lora.safetensors') image = pipeline('TOK_tad').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/TadN427/NewTad/discussions) to add images that show off what you’ve made with this LoRA.
DuongTrongChi/qwen2.5-it-sft-v1-test
DuongTrongChi
2025-05-03T06:01:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T06:00:57Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** DuongTrongChi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF
mradermacher
2025-05-03T06:00:13Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:grounded-ai/phi3.5-hallucination-judge-merge", "base_model:quantized:grounded-ai/phi3.5-hallucination-judge-merge", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T03:04:51Z
--- base_model: grounded-ai/phi3.5-hallucination-judge-merge language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/grounded-ai/phi3.5-hallucination-judge-merge <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ3_M.gguf) | i1-IQ3_M | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.1 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF/resolve/main/phi3.5-hallucination-judge-merge.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
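
## Example: downloading one quant

To fetch a single quant from this repo without cloning everything, a minimal sketch with `huggingface_hub`, picking the i1-Q4_K_M file the table marks as recommended:

```python
from huggingface_hub import hf_hub_download

# Download just the recommended i1-Q4_K_M quant from this repository.
path = hf_hub_download(
    repo_id="mradermacher/phi3.5-hallucination-judge-merge-i1-GGUF",
    filename="phi3.5-hallucination-judge-merge.i1-Q4_K_M.gguf",
)
print("saved to:", path)
```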
XformAI-india/qwen-0.6b-reasoning
XformAI-india
2025-05-03T05:59:27Z
0
0
null
[ "safetensors", "qwen3", "reasoning", "dataset:openai/gsm8k", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:mit", "region:us" ]
null
2025-05-03T05:38:30Z
--- license: mit datasets: - openai/gsm8k base_model: - Qwen/Qwen3-0.6B tags: - reasoning --- # 🧠 Qwen-0.6B Reasoning – XformAI Fine-Tuned Model **Model:** `XformAI-india/qwen-0.6b-reasoning` **Base Model:** [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) **Architecture:** Transformer decoder (GPT-style) **Fine-Tuned By:** [XformAI](https://xformai.in) **Release Date:** May 2025 **License:** MIT --- ## 🧠 What is it? `qwen-0.6b-reasoning` is a **compact transformer model fine-tuned for reasoning, logic, and analytical thinking**. Despite its size, it demonstrates strong performance across: - 🧩 Riddles & Puzzles - 🧮 Math Word Problems - 🧠 Symbolic Reasoning - 💬 Chain-of-Thought Prompting - 🔍 Common Sense Logic > Fine-tuned on a curated instruction-style dataset focused on multi-step reasoning. --- ## 🚀 Why it Matters - Performs like a **7B model** on reasoning benchmarks - **Lightweight (600M)** and can run on CPU or mobile edge devices - Excels in **step-by-step explanations** and **problem solving** --- ## 🧪 Fine-Tuning Overview ---------------------------------------------------------- | Category | Detail | |----------------------|----------------------------------| | Base Model | Qwen 0.6B | | Target Objective | Reasoning, logic, CoT | | Fine-Tuning Type | Instruction | | Optimizer | AdamW (LoRA tuning) | | Precision | bfloat16 | | Epochs | 2 | | Max Tokens | 2048 | --- ## 🧩 Prompt Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-reasoning") tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-reasoning") prompt = "A farmer has 17 sheep. All but 9 run away. How many are left?" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True))
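```

Since Qwen3-family checkpoints typically expect a chat template rather than a raw prompt, a variant of the example above can route the question through `tokenizer.apply_chat_template`. This is a minimal sketch, assuming this fine-tune keeps the base Qwen3 chat template (not confirmed by the card), and it reuses the `model` and `tokenizer` loaded above:

```python
# Continuing from the snippet above: wrap the question as a chat message.
messages = [{"role": "user", "content": "A farmer has 17 sheep. All but 9 run away. How many are left?"}]

# apply_chat_template inserts the special tokens the base Qwen3 tokenizer expects.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```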
mwalker22/AIE6-S09-b99b3324-c1e6-4624-bf18-42f7d114c011
mwalker22
2025-05-03T05:48:06Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:157", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-03T05:47:20Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:157 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l widget: - source_sentence: How does the author describe Apple’s current LLM features compared to frontier LLM capabilities? sentences: - 'Those US export regulations on GPUs to China seem to have inspired some very effective training optimizations! The environmental impact got better A welcome result of the increased efficiency of the models—both the hosted ones and the ones I can run locally—is that the energy usage and environmental impact of running a prompt has dropped enormously over the past couple of years. OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days. I have it on good authority that neither Google Gemini nor Amazon Nova (two of the least expensive model providers) are running prompts at a loss.' - 'Now that those features are rolling out they’re pretty weak. As an LLM power-user I know what these models are capable of, and Apple’s LLM features offer a pale imitation of what a frontier LLM can do. Instead we’re getting notification summaries that misrepresent news headlines and writing assistant tools that I’ve not found useful at all. Genmoji are kind of fun though. The rise of inference-scaling “reasoning” models The most interesting development in the final quarter of 2024 was the introduction of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as o1-preview and o1-mini on September 12th.' - 'A year ago, the only organization that had released a generally useful LLM was OpenAI. We’ve now seen better-than-GPT-3 class models produced by Anthropic, Mistral, Google, Meta, EleutherAI, Stability AI, TII in Abu Dhabi (Falcon), Microsoft Research, xAI, Replit, Baidu and a bunch of other organizations. The training cost (hardware and electricity) is still significant—initially millions of dollars, but that seems to have dropped to the tens of thousands already. Microsoft’s Phi-2 claims to have used “14 days on 96 A100 GPUs”, which works out at around $35,000 using current Lambda pricing.' - source_sentence: What topics are covered in the articles related to GPT and LLMs in the provided context? sentences: - 'We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt. Anthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet. With Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface. Here’s my Extract URLs app, entirely generated by Claude:' - 'Embeddings: What they are and why they matter 61.7k 79.3k Catching up on the weird world of LLMs 61.6k 85.9k llamafile is the new best way to run an LLM on your own computer 52k 66k Prompt injection explained, with video, slides, and a transcript 51k 61.9k AI-enhanced development makes me more ambitious with my projects 49.6k 60.1k Understanding GPT tokenizers 49.5k 61.1k Exploring GPTs: ChatGPT in a trench coat? 
46.4k 58.5k Could you train a ChatGPT-beating model for $85,000 and run it in a browser? 40.5k 49.2k How to implement Q&A against your documentation with GPT3, embeddings and Datasette 37.3k 44.9k Lawyer cites fake cases invented by ChatGPT, judge is not amused 37.1k 47.4k' - 'Things we learned about LLMs in 2024 Simon Willison’s Weblog Subscribe Things we learned about LLMs in 2024 31st December 2024 A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments. This is a sequel to my review of 2023. In this article:' - source_sentence: How do longer inputs enhance the problem-solving capabilities of a large language model (LLM)? sentences: - 'Embeddings: What they are and why they matter 61.7k 79.3k Catching up on the weird world of LLMs 61.6k 85.9k llamafile is the new best way to run an LLM on your own computer 52k 66k Prompt injection explained, with video, slides, and a transcript 51k 61.9k AI-enhanced development makes me more ambitious with my projects 49.6k 60.1k Understanding GPT tokenizers 49.5k 61.1k Exploring GPTs: ChatGPT in a trench coat? 46.4k 58.5k Could you train a ChatGPT-beating model for $85,000 and run it in a browser? 40.5k 49.2k How to implement Q&A against your documentation with GPT3, embeddings and Datasette 37.3k 44.9k Lawyer cites fake cases invented by ChatGPT, judge is not amused 37.1k 47.4k' - 'Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.' - 'I’ve found myself using this a lot. I noticed how much I was relying on it in October and wrote Everything I built with Claude Artifacts this week, describing 14 little tools I had put together in a seven day period. Since then, a whole bunch of other teams have built similar systems. GitHub announced their version of this—GitHub Spark—in October. Mistral Chat added it as a feature called Canvas in November. Steve Krouse from Val Town built a version of it against Cerebras, showcasing how a 2,000 token/second LLM can iterate on an application with changes visible in less than a second.' - source_sentence: What was the significance of the GPT-4 barrier mentioned in the December 2023 review? sentences: - 'The environmental impact got much, much worse The much bigger problem here is the enormous competitive buildout of the infrastructure that is imagined to be necessary for these models in the future. Companies like Google, Meta, Microsoft and Amazon are all spending billions of dollars rolling out new datacenters, with a very material impact on the electricity grid and the environment. There’s even talk of spinning up new nuclear power stations, but those can take decades. Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued crash in LLM prices might hint that it’s not. But would you want to be the big tech executive that argued NOT to build out this infrastructure only to be proven wrong in a few years’ time?' 
- 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models. The demo also sounded conspicuously similar to Scarlett Johansson... and after she complained the voice from the demo, Skye, never made it to a production product. The delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running the new features yet.' - 'The GPT-4 barrier was comprehensively broken In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didn’t? I’m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.' - source_sentence: What licensing model does Qwen25-Coder-32B use? sentences: - 'Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks about Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model! I can now run a GPT-4 class model on my laptop talks about running Meta’s Llama 3.3 70B (released in December)' - 'Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works! So hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language! How should we feel about this as software engineers? On the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you?' - 'The GPT-4 barrier was comprehensively broken Some of those GPT-4 models run on my laptop LLM prices crashed, thanks to competition and increased efficiency Multimodal vision is common, audio and video are starting to emerge Voice and live camera mode are science fiction come to life Prompt driven app generation is a commodity already Universal access to the best models lasted for just a few short months “Agents” still haven’t really happened yet Evals really matter Apple Intelligence is bad, Apple’s MLX library is excellent The rise of inference-scaling “reasoning” models Was the best currently available LLM trained in China for less than $6m? 
The environmental impact got better The environmental impact got much, much worse' pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.9166666666666666 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 1.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9166666666666666 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3333333333333333 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.20000000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.9166666666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 1.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9692441461309548 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9583333333333334 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9583333333333334 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("mwalker22/AIE6-S09-b99b3324-c1e6-4624-bf18-42f7d114c011") # Run inference sentences = [ 'What licensing model does Qwen25-Coder-32B use?', 'Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks about Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!\n\nI can now run a GPT-4 class model on my laptop talks about running Meta’s Llama 3.3 70B (released in December)', 'The GPT-4 barrier was comprehensively broken\nSome of those GPT-4 models run on my laptop\nLLM prices crashed, thanks to competition and increased efficiency\nMultimodal vision is common, audio and video are starting to emerge\nVoice and live camera mode are science fiction come to life\nPrompt driven app generation is a commodity already\nUniversal access to the best models lasted for just a few short months\n“Agents” still haven’t really happened yet\nEvals really matter\nApple Intelligence is bad, Apple’s MLX library is excellent\nThe rise of inference-scaling “reasoning” models\nWas the best currently available LLM trained in China for less than $6m?\nThe environmental impact got better\nThe environmental impact got much, much worse', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.9167 | | cosine_accuracy@3 | 1.0 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.9167 | | cosine_precision@3 | 0.3333 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.9167 | | cosine_recall@3 | 1.0 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9692** | | cosine_mrr@10 | 0.9583 | | cosine_map@100 | 0.9583 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 157 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 157 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 2 tokens</li><li>mean: 20.74 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.42 tokens</li><li>max: 214 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:----------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>When did Meta release the original Llama model?</code> | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code> | | <code>What was significant about the release of Llama 2 in July?</code> | <code>Then in February, Meta released Llama. 
And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code> | | <code>What is the new way to scale a model mentioned in the context?</code> | <code>The biggest innovation here is that it opens up a new way to scale a model: instead of improving model performance purely through additional compute at training time, models can now take on harder problems by spending more compute on inference.<br>The sequel to o1, o3 (they skipped “o2” for European trademark reasons) was announced on 20th December with an impressive result against the ARC-AGI benchmark, albeit one that likely involved more than $1,000,000 of compute time expense!<br>o3 is expected to ship in January. I doubt many people have real-world problems that would benefit from that level of compute expenditure—I certainly don’t!—but it appears to be a genuine next step in LLM architecture for taking on much harder problems.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - 
`fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9583 | | 2.0 | 32 | 0.9484 | | 3.0 | 48 | 0.9539 | | 3.125 | 50 | 0.9539 | | 4.0 | 64 | 0.9692 | | 5.0 | 80 | 0.9692 | | 6.0 | 96 | 0.9692 | | 6.25 | 100 | 0.9692 | | 7.0 | 112 | 0.9692 | | 8.0 | 128 | 0.9692 | | 9.0 | 144 | 0.9692 | | 9.375 | 150 | 0.9692 | | 10.0 | 160 | 0.9692 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint 
Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
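To reproduce the retrieval metrics reported above on your own data, a minimal sketch with `InformationRetrievalEvaluator` follows; the toy queries, corpus, and relevance judgments are placeholders, not the evaluation set used for this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("mwalker22/AIE6-S09-b99b3324-c1e6-4624-bf18-42f7d114c011")

# Placeholder data: query id -> text, document id -> text, query id -> relevant document ids.
queries = {"q1": "What licensing model does Qwen2.5-Coder-32B use?"}
corpus = {
    "d1": "Qwen2.5-Coder-32B is an Apache 2.0 licensed model that runs on a Mac.",
    "d2": "Meta's Llama 3.3 70B was released in December.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
metrics = evaluator(model)  # returns a dict of accuracy@k / precision@k / recall@k / NDCG scores
print(metrics)
```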
liamfudge/gemma-3-1b-it-Q4_K_M-GGUF
liamfudge
2025-05-03T05:46:49Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:google/gemma-3-1b-it", "base_model:quantized:google/gemma-3-1b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-03T05:46:42Z
--- base_model: google/gemma-3-1b-it library_name: transformers license: gemma pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # liamfudge/gemma-3-1b-it-Q4_K_M-GGUF This model was converted to GGUF format from [`google/gemma-3-1b-it`](https://huggingface.co/google/gemma-3-1b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/gemma-3-1b-it) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo liamfudge/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo liamfudge/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo liamfudge/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo liamfudge/gemma-3-1b-it-Q4_K_M-GGUF --hf-file gemma-3-1b-it-q4_k_m.gguf -c 2048 ```
18-Jobz-Hunting-Sajal-Malik-Viral-VideoX/NEW.EXCLUSIVE.Jobz.Hunting.Sajal.Malik.viral.video.Tutorial
18-Jobz-Hunting-Sajal-Malik-Viral-VideoX
2025-05-03T05:42:37Z
0
0
null
[ "region:us" ]
null
2025-05-03T05:41:49Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5n7shfr3?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> Actor jobz hunting sajal malik Original Video took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Actor jobz hunting sajal malik Viral Video Original Video Link On Social Media Telegram X Trending Tiktok
Hachipo/Qwen2.5-7B-PIFT-jaen_1000_2
Hachipo
2025-05-03T05:39:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T05:34:54Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ivangrapher/e613ef31-81b5-45a5-ac2e-93cd24c7392c
ivangrapher
2025-05-03T05:37:25Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T05:18:10Z
--- library_name: peft license: other base_model: deepseek-ai/deepseek-coder-6.7b-instruct tags: - axolotl - generated_from_trainer model-index: - name: e613ef31-81b5-45a5-ac2e-93cd24c7392c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: deepseek-ai/deepseek-coder-6.7b-instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 79ae7482d8ea96ee_train_data.json ds_type: json format: custom path: /workspace/input_data/79ae7482d8ea96ee_train_data.json type: field_instruction: text field_output: completion_a format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: ivangrapher/e613ef31-81b5-45a5-ac2e-93cd24c7392c hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/79ae7482d8ea96ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f234d8d9-7843-44ae-80fb-4dccf66214cc wandb_project: s56-7 wandb_run: your_name wandb_runid: f234d8d9-7843-44ae-80fb-4dccf66214cc warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # e613ef31-81b5-45a5-ac2e-93cd24c7392c This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.4296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.127 | 0.1335 | 150 | 1.4296 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
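Because this repository contains a LoRA adapter rather than a full checkpoint, loading it requires `peft`. A minimal sketch, assuming the adapter resolves its base model automatically via `AutoPeftModelForCausalLM` (the prompt and generation settings are illustrative):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads deepseek-coder-6.7b-instruct and applies this LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained("ivangrapher/e613ef31-81b5-45a5-ac2e-93cd24c7392c")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```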
SmallDoge/Qwen2.5-math-14b-llmlingua-90
SmallDoge
2025-05-03T05:32:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T16:59:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jahyungu/Qwen2.5-7B-Instruct_MetaMathQA-40K_random
jahyungu
2025-05-03T05:31:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T01:20:03Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - generated_from_trainer model-index: - name: Qwen2.5-7B-Instruct_MetaMathQA-40K_random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-7B-Instruct_MetaMathQA-40K_random This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
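No usage example is given above, so here is a minimal inference sketch for this math-tuned checkpoint; it assumes the base Qwen2.5 chat template is preserved and that `accelerate` is installed for `device_map="auto"`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jahyungu/Qwen2.5-7B-Instruct_MetaMathQA-40K_random"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires accelerate

question = "Natalia sold clips to 48 of her friends in April, and then half as many in May. How many did she sell in total?"
messages = [{"role": "user", "content": question}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```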
abduraziq/assign3
abduraziq
2025-05-03T05:23:28Z
0
0
null
[ "region:us" ]
null
2025-05-03T04:51:19Z
# Contrast to Divide: self-supervised pre-training for learning with noisy labels [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/contrast-to-divide-self-supervised-pre-1/image-classification-on-mini-webvision-1-0)](https://paperswithcode.com/sota/image-classification-on-mini-webvision-1-0?p=contrast-to-divide-self-supervised-pre-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/contrast-to-divide-self-supervised-pre-1/image-classification-on-clothing1m)](https://paperswithcode.com/sota/image-classification-on-clothing1m?p=contrast-to-divide-self-supervised-pre-1) This is an official implementation of "Contrast to Divide: self-supervised pre-training for learning with noisy labels". The code is based on the [DivideMix](https://github.com/LiJunnan1992/DivideMix) implementation. ## Results The following tables summarize the main results of the paper: CIFAR-10: ![CIFAR-10 results](./img/cifar10.png) CIFAR-100: ![CIFAR-100 results](./img/cifar100.png) Clothing1M: ![Clothing1M results](./img/clothing.png) mini-WebVision: ![mini-WebVision](./img/webvision.png) ## Running the code First, install the dependencies by running `pip install -r requirements.txt`. You can download pretrained self-supervised models from [Google Drive](https://drive.google.com/drive/folders/1qYVdggtNFQZBZ-OqVJm80LBKUKpdLPUm?usp=sharing). Alternatively, you can train them yourself using the [SimCLR implementation](https://github.com/HobbitLong/SupContrast). Put them into the `./pretrained` folder. Then you can run the code for CIFAR ``` python3 main_cifar.py --r 0.8 --lambda_u 500 --dataset cifar100 --p_threshold 0.03 --data_path ./cifar-100 --experiment-name simclr_resnet18 --method selfsup --net resnet50 ``` for Clothing1M ``` python3 main_clothing1M.py --data_path /path/to/clothing1m --experiment-name selfsup --method selfsup --p_threshold 0.7 --warmup 5 --num_epochs 120 ``` or for mini-WebVision ``` python3 Train_webvision.py --p_threshold 0.03 --num_class 50 --data_path /path/to/webvision --imagenet_data_path /path/to/imagenet --method selfsup ``` To run C2D with ELR+, just use the self-supervised pretrained models with the original [code](https://github.com/shengliu66/ELR/). ## License This project is licensed under the terms of the MIT license.
tsaksatara73/dfv
tsaksatara73
2025-05-03T05:22:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-03T05:22:40Z
--- license: creativeml-openrail-m ---
Hachipo/Qwen2.5-7B-PIFT-enja_1000_2
Hachipo
2025-05-03T05:22:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T05:17:51Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aleegis/7f0760f8-8862-4787-b7a3-74b614fd0238
aleegis
2025-05-03T05:15:11Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "region:us" ]
null
2025-05-03T03:57:04Z
--- library_name: peft license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - axolotl - generated_from_trainer model-index: - name: 7f0760f8-8862-4787-b7a3-74b614fd0238 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: elyza/Llama-3-ELYZA-JP-8B bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 13b16be7f737d1a4_train_data.json ds_type: json format: custom path: /workspace/input_data/13b16be7f737d1a4_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/7f0760f8-8862-4787-b7a3-74b614fd0238 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/13b16be7f737d1a4_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: a15fa850-4ddf-4312-aec2-39afd0e9a706 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: a15fa850-4ddf-4312-aec2-39afd0e9a706 warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # 7f0760f8-8862-4787-b7a3-74b614fd0238 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
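Since this card describes a PEFT LoRA adapter rather than a standalone checkpoint, the weights have to be attached to the base model at load time. A minimal sketch with `peft` and `transformers`, assuming the adapter repo id from this card and default precision/device settings:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "elyza/Llama-3-ELYZA-JP-8B"
adapter_id = "aleegis/7f0760f8-8862-4787-b7a3-74b614fd0238"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# device_map="auto" assumes accelerate is installed
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights on top of the frozen base
```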
penelitianpsmatematika/medical-classification-t5-small-v3
penelitianpsmatematika
2025-05-03T04:49:46Z
3
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-04-29T15:07:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
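The tags mark this as a T5 checkpoint with a text-classification head, so the simplest way to query it is through the `transformers` pipeline. A minimal sketch, assuming the checkpoint was saved with a sequence-classification head; the card does not document the label set, so the example input and returned labels are illustrative only:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="penelitianpsmatematika/medical-classification-t5-small-v3",
)
# Example input is hypothetical; the label names come from the model's config.
print(classifier("Patient presents with chest pain and shortness of breath."))
```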
mlfoundations-dev/no_pipeline_science_30k
mlfoundations-dev
2025-05-03T04:47:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T22:47:50Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: no_pipeline_science_30k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # no_pipeline_science_30k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_science_30k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
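As a chat-tuned Qwen2.5 derivative, the model is queried through its chat template. A minimal generation sketch with `transformers`, assuming the repo id from this card; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/no_pipeline_science_30k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain photosynthesis in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```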
ayushchakravarthy/phi4-mini-instruct-s1-sft
ayushchakravarthy
2025-05-03T04:46:08Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T03:48:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ayushchakravarthy/qwen3-0.6b-base-s1-sft
ayushchakravarthy
2025-05-03T04:45:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T03:53:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
XzWang/ruozhiReasoner-Qwen3-8B
XzWang
2025-05-03T04:44:46Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T04:38:15Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AthenaAgent42/llama-r1-ft13k-ex3
AthenaAgent42
2025-05-03T04:44:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T04:44:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jmalejandrob79/nbmaexp01
jmalejandrob79
2025-05-03T04:42:36Z
3
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-02T02:36:46Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: nbmaexp01 --- # Nbmaexp01 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `nbmaexp01` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "nbmaexp01", "lora_weights": "https://huggingface.co/jmalejandrob79/nbmaexp01/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jmalejandrob79/nbmaexp01', weight_name='lora.safetensors') image = pipeline('nbmaexp01').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jmalejandrob79/nbmaexp01/discussions) to add images that show off what you’ve made with this LoRA.
keplersystems/kepler-urdu-poetry-tiny
keplersystems
2025-05-03T04:28:07Z
0
0
null
[ "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "region:us" ]
text-generation
2025-05-03T01:26:00Z
--- base_model: - Qwen/Qwen3-1.7B pipeline_tag: text-generation ---
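The card lists only the Qwen3-1.7B base and a text-generation pipeline tag, so the usual Qwen3 chat workflow should apply. A minimal sketch, assuming the checkpoint keeps the base tokenizer's chat template and that a recent `transformers` release with Qwen3 support (>=4.51) is installed; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "keplersystems/kepler-urdu-poetry-tiny"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short couplet about the night sky."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```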
era-temporary/eb-man-7b-stage2-after-stage1-lr-1e-5-lora-e2
era-temporary
2025-05-03T04:24:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct", "region:us" ]
null
2025-05-03T04:23:01Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
sophiayk20/blip-gqa-ft-trial2
sophiayk20
2025-05-03T04:23:46Z
0
0
transformers
[ "transformers", "safetensors", "blip-2", "visual-question-answering", "generated_from_trainer", "base_model:Salesforce/blip2-opt-2.7b", "base_model:finetune:Salesforce/blip2-opt-2.7b", "license:mit", "endpoints_compatible", "region:us" ]
visual-question-answering
2025-05-02T22:18:14Z
--- library_name: transformers license: mit base_model: Salesforce/blip2-opt-2.7b tags: - generated_from_trainer model-index: - name: blip-gqa-ft-trial2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # blip-gqa-ft-trial2 This model is a fine-tuned version of [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0074 | 1.0 | 313 | 2.0112 | | 1.7625 | 2.0 | 626 | 1.9272 | | 1.853 | 3.0 | 939 | 1.8629 | | 1.6087 | 4.0 | 1252 | 1.8508 | | 1.6017 | 4.9856 | 1560 | 1.8559 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.4.0+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1
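For inference, the fine-tuned checkpoint should load through the standard BLIP-2 classes. A minimal visual-question-answering sketch, assuming the processor was pushed alongside the weights; the image path and question are placeholders:

```python
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_id = "sophiayk20/blip-gqa-ft-trial2"
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("example.jpg")  # placeholder path; supply any RGB image
prompt = "Question: what is shown in the picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```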
BABYSHARK09/New58
BABYSHARK09
2025-05-03T04:18:12Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T03:01:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
earcherc/girl1
earcherc
2025-05-03T04:14:09Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-05-03T04:12:08Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/ComfyICU_00001_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # girl1 <Gallery /> ## Model description A first-attempt LoRA. ## Download model Weights for this model are available in Safetensors format. [Download](/earcherc/girl1/tree/main) them in the Files & versions tab.
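The card only links the raw safetensors, but the adapter can also be applied with diffusers. A minimal sketch, assuming access to FLUX.1-dev and that the repo holds a single LoRA weight file (pass `weight_name=` explicitly if the file listed under Files & versions has a different name); the prompt is illustrative since no trigger word is documented:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("earcherc/girl1")  # add weight_name="..." if auto-detection fails
image = pipeline("a portrait photo of a young woman").images[0]
image.save("girl1.png")
```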
cyberbabooshka/post_pretrain_pre_cooldown
cyberbabooshka
2025-05-03T04:11:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "axolotl", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T04:11:11Z
--- library_name: transformers tags: - axolotl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fats-fme/11813507-b1af-412e-a487-858d4ea24855
fats-fme
2025-05-03T04:08:19Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "region:us" ]
null
2025-05-03T03:59:43Z
--- library_name: peft license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - axolotl - generated_from_trainer model-index: - name: 11813507-b1af-412e-a487-858d4ea24855 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: elyza/Llama-3-ELYZA-JP-8B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 13b16be7f737d1a4_train_data.json ds_type: json format: custom path: /workspace/input_data/13b16be7f737d1a4_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: fats-fme/11813507-b1af-412e-a487-858d4ea24855 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 130GB max_steps: 50 micro_batch_size: 1 mlflow_experiment_name: /tmp/13b16be7f737d1a4_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 saves_per_epoch: null sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a15fa850-4ddf-4312-aec2-39afd0e9a706 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: a15fa850-4ddf-4312-aec2-39afd0e9a706 warmup_steps: 200 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 11813507-b1af-412e-a487-858d4ea24855 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0012 | 1 | 1.1470 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
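Since this repo ships a PEFT LoRA adapter rather than full weights, a minimal inference sketch — an assumption based on the `peft` library tag and the base model named above, not part of the original card — would load the adapter on top of the ELYZA base model:

```python
# Hedged sketch: load the LoRA adapter from this repo onto the base model.
# Assumes the uploaded files are a standard PEFT adapter checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("elyza/Llama-3-ELYZA-JP-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "fats-fme/11813507-b1af-412e-a487-858d4ea24855")
tokenizer = AutoTokenizer.from_pretrained("elyza/Llama-3-ELYZA-JP-8B")

inputs = tokenizer("Please introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```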
BABYSHARK09/New56
BABYSHARK09
2025-05-03T04:07:11Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T03:00:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
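The card itself is an empty template; going only by the repo tags (`transformers`, `llama`, `text-generation`), a hedged loading sketch might look like the following — the prompt and generation settings are illustrative assumptions, not documented behavior:

```python
# Hedged sketch: nothing about this model is documented beyond its tags,
# which mark it as a transformers-compatible Llama text-generation model.
from transformers import pipeline

generator = pipeline("text-generation", model="BABYSHARK09/New56", device_map="auto")
print(generator("Hello,", max_new_tokens=32)[0]["generated_text"])
```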
KushGupster/granite-3-flux-1-Q8_0-GGUF
KushGupster
2025-05-03T04:06:08Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:KushGupster/granite-3-flux-1", "base_model:quantized:KushGupster/granite-3-flux-1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T04:05:23Z
--- base_model: KushGupster/granite-3-flux-1 tags: - llama-cpp - gguf-my-repo --- # KushGupster/granite-3-flux-1-Q8_0-GGUF This model was converted to GGUF format from [`KushGupster/granite-3-flux-1`](https://huggingface.co/KushGupster/granite-3-flux-1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/KushGupster/granite-3-flux-1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo KushGupster/granite-3-flux-1-Q8_0-GGUF --hf-file granite-3-flux-1-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo KushGupster/granite-3-flux-1-Q8_0-GGUF --hf-file granite-3-flux-1-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo KushGupster/granite-3-flux-1-Q8_0-GGUF --hf-file granite-3-flux-1-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo KushGupster/granite-3-flux-1-Q8_0-GGUF --hf-file granite-3-flux-1-q8_0.gguf -c 2048 ```
Zack-Z/gemma3_27bi_cotsft_rs0_2_5cut_gem3all_e2
Zack-Z
2025-05-03T04:02:05Z
0
0
transformers
[ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-27b-it", "base_model:finetune:unsloth/gemma-3-27b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T01:44:54Z
--- base_model: unsloth/gemma-3-27b-it tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Zack-Z - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-27b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
penelitianpsmatematika/medical-text-generation-t5-small-v1
penelitianpsmatematika
2025-05-03T03:59:13Z
6
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-04-29T08:53:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NikolayKozloff/Qwen3-16B-A3B-Q5_K_S-GGUF
NikolayKozloff
2025-05-03T03:47:54Z
0
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:kalomaze/Qwen3-16B-A3B", "base_model:quantized:kalomaze/Qwen3-16B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T03:47:07Z
--- base_model: kalomaze/Qwen3-16B-A3B license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/Qwen3-16B-A3B-Q5_K_S-GGUF This model was converted to GGUF format from [`kalomaze/Qwen3-16B-A3B`](https://huggingface.co/kalomaze/Qwen3-16B-A3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/kalomaze/Qwen3-16B-A3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/Qwen3-16B-A3B-Q5_K_S-GGUF --hf-file qwen3-16b-a3b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/Qwen3-16B-A3B-Q5_K_S-GGUF --hf-file qwen3-16b-a3b-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/Qwen3-16B-A3B-Q5_K_S-GGUF --hf-file qwen3-16b-a3b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/Qwen3-16B-A3B-Q5_K_S-GGUF --hf-file qwen3-16b-a3b-q5_k_s.gguf -c 2048 ```
RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf
RichardErkhov
2025-05-03T03:44:58Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T01:42:21Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3_it_dpo_list_and_bold - GGUF - Model creator: https://huggingface.co/1231czx/ - Original model: https://huggingface.co/1231czx/llama3_it_dpo_list_and_bold/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3_it_dpo_list_and_bold.Q2_K.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3_it_dpo_list_and_bold.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3_it_dpo_list_and_bold.IQ3_S.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3_it_dpo_list_and_bold.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3_it_dpo_list_and_bold.IQ3_M.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3_it_dpo_list_and_bold.Q3_K.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3_it_dpo_list_and_bold.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3_it_dpo_list_and_bold.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3_it_dpo_list_and_bold.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3_it_dpo_list_and_bold.Q4_0.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3_it_dpo_list_and_bold.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3_it_dpo_list_and_bold.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3_it_dpo_list_and_bold.Q4_K.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3_it_dpo_list_and_bold.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama3_it_dpo_list_and_bold.Q4_1.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3_it_dpo_list_and_bold.Q5_0.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[llama3_it_dpo_list_and_bold.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3_it_dpo_list_and_bold.Q5_K.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3_it_dpo_list_and_bold.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3_it_dpo_list_and_bold.Q5_1.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3_it_dpo_list_and_bold.Q6_K.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3_it_dpo_list_and_bold.Q8_0.gguf](https://huggingface.co/RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf/blob/main/llama3_it_dpo_list_and_bold.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
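To complement the quant table above, here is a hedged sketch of fetching a single quant file with `huggingface_hub` before running it in any GGUF-capable runtime; the choice of the Q4_K_M file is an illustrative assumption:

```python
# Hedged sketch: download one quant from the table above to the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/1231czx_-_llama3_it_dpo_list_and_bold-gguf",
    filename="llama3_it_dpo_list_and_bold.Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp or another GGUF runtime
```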
BABYSHARK09/New47
BABYSHARK09
2025-05-03T03:37:38Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:59:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BABYSHARK09/New48
BABYSHARK09
2025-05-03T03:37:36Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T03:00:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DoppelReflEx/MiniusLight-24B-v2.2a-test-Q4_K_S-GGUF
DoppelReflEx
2025-05-03T03:34:32Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DoppelReflEx/MiniusLight-24B-v2.2a-test", "base_model:quantized:DoppelReflEx/MiniusLight-24B-v2.2a-test", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T03:33:29Z
--- base_model: DoppelReflEx/MiniusLight-24B-v2.2a-test library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # DoppelReflEx/MiniusLight-24B-v2.2a-test-Q4_K_S-GGUF This model was converted to GGUF format from [`DoppelReflEx/MiniusLight-24B-v2.2a-test`](https://huggingface.co/DoppelReflEx/MiniusLight-24B-v2.2a-test) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DoppelReflEx/MiniusLight-24B-v2.2a-test) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo DoppelReflEx/MiniusLight-24B-v2.2a-test-Q4_K_S-GGUF --hf-file miniuslight-24b-v2.2a-test-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo DoppelReflEx/MiniusLight-24B-v2.2a-test-Q4_K_S-GGUF --hf-file miniuslight-24b-v2.2a-test-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo DoppelReflEx/MiniusLight-24B-v2.2a-test-Q4_K_S-GGUF --hf-file miniuslight-24b-v2.2a-test-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo DoppelReflEx/MiniusLight-24B-v2.2a-test-Q4_K_S-GGUF --hf-file miniuslight-24b-v2.2a-test-q4_k_s.gguf -c 2048 ```
grimjim/MagTie-v1-12B
grimjim
2025-05-03T03:31:42Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "base_model:Delta-Vector/Francois-Huali-12B", "base_model:merge:Delta-Vector/Francois-Huali-12B", "base_model:grimjim/Magnolia-v3-12B", "base_model:merge:grimjim/Magnolia-v3-12B", "base_model:grimjim/mistralai-Mistral-Nemo-Base-2407", "base_model:merge:grimjim/mistralai-Mistral-Nemo-Base-2407", "base_model:inflatebot/MN-12B-Mag-Mell-R1", "base_model:merge:inflatebot/MN-12B-Mag-Mell-R1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:47:26Z
--- base_model: - Delta-Vector/Francois-Huali-12B - grimjim/mistralai-Mistral-Nemo-Base-2407 - grimjim/Magnolia-v3-12B - inflatebot/MN-12B-Mag-Mell-R1 library_name: transformers pipeline_tag: text-generation tags: - mergekit - merge license: apache-2.0 --- # MagTie-v1-12B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). We used the pretrained base model as the base of a DARE-TIES merge, boosting the contributing models' weights and densities to retain more of their training. ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [grimjim/mistralai-Mistral-Nemo-Base-2407](https://huggingface.co/grimjim/mistralai-Mistral-Nemo-Base-2407) as the base. ### Models Merged The following models were included in the merge: * [Delta-Vector/Francois-Huali-12B](https://huggingface.co/Delta-Vector/Francois-Huali-12B) * [grimjim/Magnolia-v3-12B](https://huggingface.co/grimjim/Magnolia-v3-12B) * [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: grimjim/mistralai-Mistral-Nemo-Base-2407 models: - model: grimjim/mistralai-Mistral-Nemo-Base-2407 - model: inflatebot/MN-12B-Mag-Mell-R1 parameters: weight: 0.85 density: 0.75 - model: Delta-Vector/Francois-Huali-12B parameters: weight: 0.85 density: 0.75 - model: grimjim/Magnolia-v3-12B parameters: weight: 0.85 density: 0.75 merge_method: dare_ties parameters: normalize: true int8_mask: true dtype: bfloat16 ```
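The YAML above is a complete mergekit configuration, so the merge should be reproducible with mergekit's command-line entry point; the output directory name and the `--cuda` flag are assumptions, not part of the original card:

```python
# Hedged sketch: run the merge config above via the mergekit-yaml CLI.
# Assumes mergekit is installed (pip install mergekit) and the YAML is saved as config.yaml.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./MagTie-v1-12B", "--cuda"],
    check=True,  # raise if the merge step fails
)
```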
hesiii/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soft_tame_condor
hesiii
2025-05-03T03:30:56Z
16
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am soft tame condor", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T23:37:57Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soft_tame_condor tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am soft tame condor - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soft_tame_condor This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hesiii/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-soft_tame_condor", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
punitub01/llama2-7b-qlora-finetuned
punitub01
2025-05-03T03:29:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T03:28:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BABYSHARK09/New45
BABYSHARK09
2025-05-03T03:25:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:59:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NoorNizar/Phi-4-mini-instruct-WINT4
NoorNizar
2025-05-03T03:25:18Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "llmcompressor", "quantization", "wint4", "conversational", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-05-03T03:23:31Z
--- library_name: transformers tags: - llmcompressor - quantization - wint4 --- # Phi-4-mini-instruct-WINT4 This model is a 4-bit quantized version of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) using the [llmcompressor](https://github.com/neuralmagic/llmcompressor) library. ## Quantization Details * **Base Model:** [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) * **Quantization Library:** `llmcompressor` * **Quantization Method:** Weight-only 4-bit int (WINT4) * **Quantization Recipe:** ```yaml quant_stage: quant_modifiers: QuantizationModifier: ignore: [lm_head] config_groups: group_0: weights: {num_bits: 4, type: int, symmetric: true, strategy: channel, dynamic: false} targets: [Linear] ``` ## Evaluation Results The following table shows the evaluation results on various benchmarks compared to the baseline (non-quantized) model. | Task | Baseline Metric (10.0% Threshold) | Quantized Metric | Metric Type | |------------------|-------------------------------------------------------|------------------|---------------------| | winogrande | 0.7545 | 0.6985 | acc,none | ## How to Use You can load the quantized model and tokenizer using the `transformers` library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "NoorNizar/Phi-4-mini-instruct-WINT4" model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_id) # Example usage (replace with your specific task) prompt = "Hello, world!" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=50) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Disclaimer This model was quantized automatically using a script. Performance and behavior might differ slightly from the original base model.
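The recipe above is the declarative half of the process; a hedged sketch of how such a recipe is typically applied with llmcompressor's one-shot API follows — the exact import path and the `save_compressed` flag vary across llmcompressor versions, so treat this as an assumption rather than the author's actual script:

```python
# Hedged sketch: apply the WINT4 recipe above to the base model in one shot.
from transformers import AutoModelForCausalLM
from llmcompressor.transformers import oneshot  # import path differs in newer releases

model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-mini-instruct", torch_dtype="auto")
oneshot(model=model, recipe="recipe.yaml")  # recipe.yaml holds the YAML shown above
model.save_pretrained("Phi-4-mini-instruct-WINT4", save_compressed=True)
```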
mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF
mradermacher
2025-05-03T03:21:38Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:IntelLabs/sqft-sparsepeft-phi-3-mini-4k-30-math-heu", "base_model:quantized:IntelLabs/sqft-sparsepeft-phi-3-mini-4k-30-math-heu", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T21:18:45Z
--- base_model: IntelLabs/sqft-sparsepeft-phi-3-mini-4k-30-math-heu language: en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/IntelLabs/sqft-sparsepeft-phi-3-mini-4k-30-math-heu <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.IQ4_XS.gguf) | IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q3_K_L.gguf) | Q3_K_L | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q6_K.gguf) | Q6_K | 3.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF/resolve/main/sqft-sparsepeft-phi-3-mini-4k-30-math-heu.f16.gguf) | f16 | 7.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
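As a generic usage sketch (not part of the original card): one quant file can be fetched with `huggingface_hub` and run locally. The llama-cpp-python bindings below are an assumption; the card itself points to llama.cpp-style usage via TheBloke's READMEs. The Q4_K_S file name is taken from the table above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python bindings (assumed; any GGUF runner works)

# Download a single quant file from this repo (Q4_K_S, the "fast, recommended" row)
path = hf_hub_download(
    repo_id="mradermacher/sqft-sparsepeft-phi-3-mini-4k-30-math-heu-GGUF",
    filename="sqft-sparsepeft-phi-3-mini-4k-30-math-heu.Q4_K_S.gguf",
)

# Load the quant and run a short completion
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Solve step by step: 12 * 7 =", max_tokens=64)
print(out["choices"][0]["text"])
```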
toilahonganh1712/tinyllama-bnb-4bit-travelvungtau360
toilahonganh1712
2025-05-03T03:16:35Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:finetune:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T03:16:27Z
--- base_model: unsloth/tinyllama-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** toilahonganh1712 - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
thavens-research/Qwen2.5-0.5B-Instruct
thavens-research
2025-05-03T03:12:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T03:10:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sakhalif10/fluxoldvhseffect
sakhalif10
2025-05-03T03:10:14Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-05-03T03:10:09Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/VHS+Trailer+v3+4-3.00_00_48_26.Still001.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---

# vhs-old-effect-flux

<Gallery />

## Model description

This is my first Flux LoRA.

## Download model

[Download](/sakhalif10/fluxoldvhseffect/tree/main) them in the Files & versions tab.
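No usage snippet is included; as a hedged sketch, a FLUX.1-dev LoRA like this one can typically be loaded with diffusers' `load_lora_weights`. The card documents no trigger word or recommended settings, so the prompt and step count below are assumptions.

```python
import torch
from diffusers import FluxPipeline

# Load the base pipeline this LoRA was trained against
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA from this repo
pipe.load_lora_weights("sakhalif10/fluxoldvhseffect")

# Hypothetical prompt; the card does not specify a trigger phrase
image = pipe("a city street at night, old VHS tape look", num_inference_steps=28).images[0]
image.save("vhs.png")
```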
memeviss/zombieIV_6
memeviss
2025-05-03T03:01:24Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T02:34:26Z
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
mradermacher/Qwen3-32B-Uncensored-i1-GGUF
mradermacher
2025-05-03T03:00:12Z
0
3
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:Guilherme34/uncensor", "base_model:nicoboss/Qwen3-32B-Uncensored", "base_model:quantized:nicoboss/Qwen3-32B-Uncensored", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-02T23:34:19Z
--- base_model: nicoboss/Qwen3-32B-Uncensored datasets: - Guilherme34/uncensor language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/LICENSE quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/nicoboss/Qwen3-32B-Uncensored <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 11.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | 
| [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF/resolve/main/Qwen3-32B-Uncensored.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
MinaMila/phi3_unlearned_LoRa_ACSEmployment_2_cfda_ep6_22
MinaMila
2025-05-03T02:55:13Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T02:55:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Membersuger/Euro_5
Membersuger
2025-05-03T02:54:40Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:31:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flyingbugs/Qwen2.5-Math-7B-generalthoughts-0.5-token-prune
flyingbugs
2025-05-03T02:52:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:flyingbugs/GeneralThought-195K-pruned-keep-0.5-token-prune", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T21:20:03Z
--- base_model: Qwen/Qwen2.5-Math-7B-Instruct datasets: flyingbugs/GeneralThought-195K-pruned-keep-0.5-token-prune library_name: transformers model_name: Qwen2.5-Math-7B-generalthoughts-0.5-token-prune tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen2.5-Math-7B-generalthoughts-0.5-token-prune This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [flyingbugs/GeneralThought-195K-pruned-keep-0.5-token-prune](https://huggingface.co/datasets/flyingbugs/GeneralThought-195K-pruned-keep-0.5-token-prune) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="flyingbugs/Qwen2.5-Math-7B-generalthoughts-0.5-token-prune", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jjh233/huggingface/runs/5bizs4qo) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1+cu121 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Alphatao/3c248048-d823-41e8-acd1-08b0985334a5
Alphatao
2025-05-03T02:50:45Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2.5-Math-1.5B", "base_model:finetune:unsloth/Qwen2.5-Math-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T23:39:22Z
--- base_model: unsloth/Qwen2.5-Math-1.5B library_name: transformers model_name: 3c248048-d823-41e8-acd1-08b0985334a5 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 3c248048-d823-41e8-acd1-08b0985334a5 This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alphatao/3c248048-d823-41e8-acd1-08b0985334a5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/qrfff2s8) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold9
chchen
2025-05-03T02:50:44Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:klyang/MentaLLaMA-chat-7B-hf", "base_model:adapter:klyang/MentaLLaMA-chat-7B-hf", "license:mit", "region:us" ]
null
2025-05-03T01:14:11Z
--- library_name: peft license: mit base_model: klyang/MentaLLaMA-chat-7B-hf tags: - llama-factory - lora - generated_from_trainer model-index: - name: MentaLLaMA-chat-7B-PsyCourse-doc-info-fold9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MentaLLaMA-chat-7B-PsyCourse-doc-info-fold9 This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-doc-info-train-fold9 dataset. It achieves the following results on the evaluation set: - Loss: 0.0834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3604 | 0.3951 | 10 | 0.3692 | | 1.0978 | 0.7901 | 20 | 0.2423 | | 0.1519 | 1.1852 | 30 | 0.1737 | | 0.1384 | 1.5802 | 40 | 0.1437 | | 0.1076 | 1.9753 | 50 | 0.1253 | | 0.1085 | 2.3704 | 60 | 0.1120 | | 0.0884 | 2.7654 | 70 | 0.1006 | | 0.1071 | 3.1605 | 80 | 0.0919 | | 0.0761 | 3.5556 | 90 | 0.0892 | | 0.0661 | 3.9506 | 100 | 0.0851 | | 0.0532 | 4.3457 | 110 | 0.0835 | | 0.0653 | 4.7407 | 120 | 0.0834 | ### Framework versions - PEFT 0.12.0 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
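The card documents training but not inference; below is a minimal sketch (the standard PEFT adapter-loading pattern, not taken from the original card) of running this LoRA on top of its base model. The prompt is hypothetical.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "klyang/MentaLLaMA-chat-7B-hf"
adapter_id = "chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Wrap the base model with the fine-tuned LoRA weights
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt; the course-doc-info training data is not public
inputs = tokenizer("Summarize the key points of this course document:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```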
jnjj/instruction-model
jnjj
2025-05-03T02:46:40Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T21:20:36Z
--- library_name: transformers ---
luckycanucky/discord_model_x3_16b
luckycanucky
2025-05-03T02:45:52Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:42:36Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** luckycanucky - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jahyungu/Llama-3.2-1B-Instruct_Open-Critic-GPT_9
jahyungu
2025-05-03T02:41:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:31:41Z
--- library_name: transformers license: llama3.2 base_model: meta-llama/Llama-3.2-1B-Instruct tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-Instruct_Open-Critic-GPT_9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-Instruct_Open-Critic-GPT_9 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
LandCruiser/sn21_omegav1_0305_1
LandCruiser
2025-05-03T02:35:24Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-03T02:18:57Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
jhyun0414/20250503-Llama-3.1-8B-Instruct-gemini_label-filter-e3-lr2e-06
jhyun0414
2025-05-03T02:32:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:26:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kejones/results
kejones
2025-05-03T02:27:31Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-02T19:25:43Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
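The card omits a usage example; as a sketch, a DistilBERT text-classification fine-tune like this loads through the standard `pipeline` API. Note that the label names depend on the (undocumented) fine-tuning dataset.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub
clf = pipeline("text-classification", model="kejones/results")

# Output labels (e.g. LABEL_0 / LABEL_1) reflect the unknown training data
print(clf("This is a sample sentence to classify."))
```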
ivangrapher/fbb5f401-ffa7-4787-93e4-bc6e09a1450e
ivangrapher
2025-05-03T02:24:54Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-3.2-1B", "base_model:adapter:NousResearch/Llama-3.2-1B", "license:llama3.2", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T02:19:18Z
--- library_name: peft license: llama3.2 base_model: NousResearch/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: fbb5f401-ffa7-4787-93e4-bc6e09a1450e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: NousResearch/Llama-3.2-1B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 3293ce73be5009ec_train_data.json ds_type: json format: custom path: /workspace/input_data/3293ce73be5009ec_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: ivangrapher/fbb5f401-ffa7-4787-93e4-bc6e09a1450e hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/3293ce73be5009ec_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a0ab280a-c85a-410f-a2fe-19bf02a514ec wandb_project: s56-7 wandb_run: your_name wandb_runid: a0ab280a-c85a-410f-a2fe-19bf02a514ec warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # fbb5f401-ffa7-4787-93e4-bc6e09a1450e This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9193 | 0.0853 | 150 | 0.8713 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
BenevolenceMessiah/Qwen3-14B-Enhanced-v1.0-DARE-TIES-Q8_0-GGUF
BenevolenceMessiah
2025-05-03T02:21:53Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES", "base_model:quantized:BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T02:21:41Z
--- base_model: BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES-Q8_0-GGUF This model was converted to GGUF format from [`BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES`](https://huggingface.co/BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES-Q8_0-GGUF --hf-file qwen-3-14b-enhanced-v1.0-dare-ties-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES-Q8_0-GGUF --hf-file qwen-3-14b-enhanced-v1.0-dare-ties-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES-Q8_0-GGUF --hf-file qwen-3-14b-enhanced-v1.0-dare-ties-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BenevolenceMessiah/Qwen-3-14B-Enhanced-v1.0-DARE-TIES-Q8_0-GGUF --hf-file qwen-3-14b-enhanced-v1.0-dare-ties-q8_0.gguf -c 2048 ```
vermoney/7d485ece-7d7a-4c1c-8d11-676bd95a0643
vermoney
2025-05-03T02:21:13Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-3.2-1B", "base_model:adapter:NousResearch/Llama-3.2-1B", "license:llama3.2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T02:19:39Z
--- library_name: peft license: llama3.2 base_model: NousResearch/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: 7d485ece-7d7a-4c1c-8d11-676bd95a0643 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Llama-3.2-1B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3293ce73be5009ec_train_data.json ds_type: json format: custom path: /workspace/input_data/3293ce73be5009ec_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vermoney/7d485ece-7d7a-4c1c-8d11-676bd95a0643 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/3293ce73be5009ec_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a0ab280a-c85a-410f-a2fe-19bf02a514ec wandb_project: s56-9 wandb_run: your_name wandb_runid: a0ab280a-c85a-410f-a2fe-19bf02a514ec warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 7d485ece-7d7a-4c1c-8d11-676bd95a0643 This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8896 | 0.1138 | 200 | 0.8508 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
jairosolare/ArashaLalani_biglust16_LoRa
jairosolare
2025-05-03T02:20:58Z
0
0
null
[ "region:us" ]
null
2025-05-03T02:19:45Z
SDXL LoRA trained on Biglust 1.6; works well with the DMD2 LoRA.

- Sampler: LCM Karras
- Weight: around 1.0
- Steps: 10-14
- Trigger: celeb name
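In A1111-style UIs those settings would look something like `<lora:ArashaLalani_biglust16:1.0> arasha lalani, ...` at 10-14 steps with the LCM Karras sampler; the LoRA filename and trigger spelling here are assumptions based on the repo name, so match them to the file you actually download.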
ellietang/hf_saved_lora_amf-modCase-qwen-coder-14B-SFT-after-CPT-try1-no-SYSTEM_PROMPT
ellietang
2025-05-03T02:19:33Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T02:19:25Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BenevolenceMessiah/Qwen3-14B-Enhanced-v1.0-DARE-TIES
BenevolenceMessiah
2025-05-03T02:19:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:Ba2han/Qwen-3-14B-Gemini-v0.1", "base_model:merge:Ba2han/Qwen-3-14B-Gemini-v0.1", "base_model:Qwen/Qwen3-14B", "base_model:merge:Qwen/Qwen3-14B", "base_model:secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation-qwen_3_14b", "base_model:merge:secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation-qwen_3_14b", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T02:17:28Z
--- base_model: - Ba2han/Qwen-3-14B-Gemini-v0.1 - secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation-qwen_3_14b - Qwen/Qwen3-14B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) as a base. ### Models Merged The following models were included in the merge: * [Ba2han/Qwen-3-14B-Gemini-v0.1](https://huggingface.co/Ba2han/Qwen-3-14B-Gemini-v0.1) * [secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation-qwen_3_14b](https://huggingface.co/secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation-qwen_3_14b) ### Configuration The following YAML configuration was used to produce this model: ```yaml # Qwen-3-14B-Enhanced-v1.0-DARE-TIES merge_method: dare_ties base_model: Qwen/Qwen3-14B parameters: density: 0.333 random_seed: 37 models: - model: secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation-qwen_3_14b parameters: weight: 0.5 - model: Ba2han/Qwen-3-14B-Gemini-v0.1 parameters: weight: 0.5 tokenizer: source: union chat_template: auto dtype: bfloat16 ```
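To reproduce a merge like this one (a sketch, assuming mergekit is installed and the YAML above is saved as `config.yaml`), the usual workflow is to run `mergekit-yaml config.yaml ./merged-model`, optionally with `--cuda` for GPU-accelerated merging; the output directory then loads like any ordinary Transformers checkpoint.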
mradermacher/Qwen3-32B-Uncensored-GGUF
mradermacher
2025-05-03T02:18:49Z
0
1
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:Guilherme34/uncensor", "base_model:nicoboss/Qwen3-32B-Uncensored", "base_model:quantized:nicoboss/Qwen3-32B-Uncensored", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-02T23:05:43Z
--- base_model: nicoboss/Qwen3-32B-Uncensored datasets: - Guilherme34/uncensor language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/LICENSE quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/nicoboss/Qwen3-32B-Uncensored <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.IQ4_XS.gguf) | IQ4_XS | 18.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q5_K_S.gguf) | Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q5_K_M.gguf) | Q5_K_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-32B-Uncensored-GGUF/resolve/main/Qwen3-32B-Uncensored.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
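As a quick-start sketch (not part of the original card), one of these quants can be run locally through the llama-cpp-python bindings; the file name, context size, and GPU offload below are illustrative assumptions:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quant downloaded from the table above (illustrative file choice)
llm = Llama(
    model_path="Qwen3-32B-Uncensored.Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise or lower to fit your RAM/VRAM
    n_gpu_layers=-1,  # offload all layers to GPU if the wheel was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is a GGUF quant?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same code works with any file from the table; smaller quants such as Q2_K trade answer quality for a much lower memory footprint.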
smrc/fr-qc-turbo-omg-token
smrc
2025-05-03T02:17:33Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T02:17:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jairosolare/SabrinaCarpenter_biglust16_LoRa
jairosolare
2025-05-03T02:16:37Z
0
0
null
[ "region:us" ]
null
2025-05-03T02:15:17Z
SDXL LoRA trained on Biglust 1.6; works well with the DMD2 LoRA.

- Sampler: LCM Karras
- Weight: around 1.0
- Steps: 10-14
- Trigger: celeb name
jairosolare/DishaPatani_biglust16_LoRa
jairosolare
2025-05-03T02:12:06Z
0
0
null
[ "region:us" ]
null
2025-05-03T02:09:43Z
SDXL LoRA trained on Biglust 1.6; works well with the DMD2 LoRA.

- Sampler: LCM Karras
- Weight: around 1.0
- Steps: 10-14
- Trigger: celeb name

Credit to the creator: https://civitai.com/models/1421562/disha-patani-sdxl?modelVersionId=1606785