| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-08 06:28:24) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 492 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-08 06:28:24) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
takuya23/SD-model-fp16 | takuya23 | 2025-08-08T03:00:10Z | 0 | 0 | null | ["license:unknown", "region:us"] | null | 2023-06-17T07:23:07Z | (card unavailable: temporary redirect) |
thejaminator/1e-4-mia_qwen3_32b-20250808_101136-3epoch | thejaminator | 2025-08-08T02:27:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-32B", "base_model:finetune:unsloth/Qwen3-32B", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-08T02:26:14Z | (card unavailable: temporary redirect) |
giovannidemuri/llama8b-er-afg-v72-seed2-hx | giovannidemuri | 2025-08-08T02:05:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-08T00:05:37Z | (card unavailable: temporary redirect) |
DungND1107/qwentran | DungND1107 | 2025-08-08T00:49:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-08T00:49:02Z | (card unavailable: temporary redirect) |
anikifoss/Kimi-K2-Instruct-DQ4_K | anikifoss | 2025-08-08T00:04:21Z | 649 | 3 | null | ["gguf", "mla", "conversational", "text-generation", "base_model:moonshotai/Kimi-K2-Instruct", "base_model:quantized:moonshotai/Kimi-K2-Instruct", "license:other", "endpoints_compatible", "region:us"] | text-generation | 2025-07-14T18:24:32Z | (card unavailable: temporary redirect) |
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_lethal_heron | hamid1232 | 2025-08-07T23:49:49Z | 99 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am grassy_lethal_heron", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-02T17:19:41Z | (card unavailable: temporary redirect) |
engakyildiz/Qwen3-0.6B-Gensyn-Swarm-agile_gregarious_dolphin | engakyildiz | 2025-08-07T23:22:38Z | 69 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am agile_gregarious_dolphin", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-26T14:49:10Z | (card unavailable: temporary redirect) |
mohammadmahdinouri/modernAlbert-distilled-1T | mohammadmahdinouri | 2025-08-07T23:16:31Z | 130 | 0 | transformers | ["transformers", "pytorch", "ModernALBERT", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-06T14:19:14Z | (card unavailable: temporary redirect) |
MajorJalud/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-reptilian_strong_gull | MajorJalud | 2025-08-07T22:24:27Z | 6 | 1 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am reptilian strong gull", "trl", "genrl-swarm", "I am reptilian_strong_gull", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-10T18:14:27Z | (card unavailable: temporary redirect) |
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1754604558 | zenqqq | 2025-08-07T22:20:06Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless reptilian caterpillar", "arxiv:2504.07091", "region:us"] | null | 2025-08-07T22:20:01Z | (card unavailable: temporary redirect) |
bgunlp/qwen3-8b-sft-cot-qd-suff-4bit-1ep | bgunlp | 2025-08-07T22:07:50Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-08-07T22:06:25Z | (card unavailable: temporary redirect) |
vnyaryan/bitnet-lora-injected01 | vnyaryan | 2025-08-07T21:55:08Z | 32 | 0 | transformers | ["transformers", "safetensors", "bitnet", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-05T23:33:05Z | (card unavailable: temporary redirect) |
ElRompeAnosFullAnal/ElRompeAnosFullAnal | ElRompeAnosFullAnal | 2025-08-07T20:34:17Z | 0 | 0 | null | ["license:cc-by-nc-4.0", "region:us"] | null | 2025-03-31T22:45:18Z | (card unavailable: temporary redirect) |
MikeRoz/GLM-4.5-exl3 | MikeRoz | 2025-08-07T20:02:35Z | 8 | 4 | exllamav3 | ["exllamav3", "exl3", "text-generation", "en", "zh", "base_model:zai-org/GLM-4.5", "base_model:quantized:zai-org/GLM-4.5", "license:mit", "region:us"] | text-generation | 2025-08-06T01:47:41Z | (card unavailable: temporary redirect) |
Washere-1/outputs | Washere-1 | 2025-08-07T19:40:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "dataset:Washere-1/kinyarwanda_reasoning", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us"] | null | 2025-08-07T19:39:38Z | (card unavailable: temporary redirect) |
nikhil061307/contrastive-learning-bert-added-token | nikhil061307 | 2025-08-07T18:29:25Z | 0 | 0 | transformers | ["transformers", "pytorch", "safetensors", "modernbert", "feature-extraction", "clinical-notes", "contrastive-learning", "sentence-embeddings", "medical-nlp", "clinical-modernbert", "en", "dataset:clinical-notes", "base_model:Simonlee711/Clinical_ModernBERT", "base_model:finetune:Simonlee711/Clinical_ModernBERT", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2025-08-07T17:04:14Z | (card unavailable: temporary redirect) |
jshargo/gemma-3N-4B-intelli-Q8_0-GGUF | jshargo | 2025-08-07T17:49:43Z | 0 | 0 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "gemma3n", "llama-cpp", "gguf-my-repo", "en", "base_model:jshargo/gemma-3N-4B-intelli", "base_model:quantized:jshargo/gemma-3N-4B-intelli", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-07T17:49:10Z | (card unavailable: temporary redirect) |
LT3/definitions-all-llama-8B-instruct | LT3 | 2025-08-07T17:23:36Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "region:us"] | null | 2025-08-05T14:20:47Z | (card unavailable: temporary redirect) |
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1754586014 | zenqqq | 2025-08-07T17:07:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless reptilian caterpillar", "arxiv:2504.07091", "region:us"] | null | 2025-08-07T17:07:08Z | (card unavailable: temporary redirect) |
ketchup123/grpo_nb | ketchup123 | 2025-08-07T16:49:05Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:Qwen/Qwen2-0.5B-Instruct", "grpo", "lora", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:Qwen/Qwen2-0.5B-Instruct", "region:us"] | text-generation | 2025-08-07T16:48:58Z | (card unavailable: temporary redirect) |
KCh3dRi4n/gongyil-exaone | KCh3dRi4n | 2025-08-07T16:35:57Z | 77 | 0 | mlx | ["mlx", "safetensors", "exaone4", "lg-ai", "exaone", "exaone-4.0", "text-generation", "conversational", "en", "ko", "es", "base_model:LGAI-EXAONE/EXAONE-4.0-1.2B", "base_model:finetune:LGAI-EXAONE/EXAONE-4.0-1.2B", "license:other", "region:us"] | text-generation | 2025-07-29T10:19:04Z | (card unavailable: temporary redirect) |
abd-ur/GvEM | abd-ur | 2025-08-07T16:19:30Z | 0 | 1 | null | ["RepresentationLearning", "Genomics", "Variant", "Classiciation", "Mutations", "Embedding", "VariantClassificaion", "token-classification", "en", "license:apache-2.0", "region:us"] | token-classification | 2025-08-04T06:58:26Z | (card unavailable: temporary redirect) |
Aasher/Image2Prompt_Generator_Gemma_3n_2B | Aasher | 2025-08-07T16:07:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3n", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-08-07T16:03:48Z | (card unavailable: temporary redirect) |
roeker/blockassist-bc-quick_wiry_owl_1754582553 | roeker | 2025-08-07T16:04:29Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-07T16:03:46Z | (card unavailable: temporary redirect) |
seraphimzzzz/689299 | seraphimzzzz | 2025-08-07T15:54:14Z | 0 | 0 | null | ["region:us"] | null | 2025-08-07T15:54:11Z | (card unavailable: temporary redirect) |
OscarGD6/qwen2-vl-text-prompt-coco | OscarGD6 | 2025-08-07T15:34:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-07T15:29:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MB55/LLaMleinclassifier | MB55 | 2025-08-07T15:03:56Z | 0 | 0 | null | ["safetensors", "base_model:LSX-UniWue/LLaMmlein_7B_chat", "base_model:finetune:LSX-UniWue/LLaMmlein_7B_chat", "license:mit", "region:us"] | null | 2025-04-27T17:13:54Z |
---
base_model: LSX-UniWue/LLaMmlein_7B_chat
license: mit
---
# LLäMmlein QLoRA – Grounding Act Classification
This model is a fine-tuned version of [LSX-UniWue/LLaMmlein_7B_chat](https://huggingface.co/LSX-UniWue/LLaMmlein_7B_chat), optimized using QLoRA for efficient binary classification of German dialogue utterances into:
- **advance**: Contribution that moves the dialogue forward (e.g. confirmations, follow-ups, elaborations)
- **non_advance**: Other utterances (e.g. vague responses, misunderstandings, irrelevant comments)
---
## Use Cases
- Dialogue system analysis
- Teacher-student interaction classification
- Grounding in institutional advising or classroom discourse
---
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MB55/qlora_new")
tokenizer = AutoTokenizer.from_pretrained("MB55/qlora_new")
inputs = tokenizer("Können Sie das noch einmal erklären?", return_tensors="pt")  # German example, since the classifier targets German utterances
outputs = model(**inputs)
prediction = outputs.logits.argmax(dim=-1)
print(prediction)  # 0 = non_advance, 1 = advance
```
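`prediction` is a tensor of shape `(1,)`; call `prediction.item()` to obtain the integer class index.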
|
mehdirafiei/bert_resume_category_classifier | mehdirafiei | 2025-08-07T15:00:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-07T15:00:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
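The automatic template leaves this section empty. As a non-authoritative starting point, here is a minimal sketch assuming the checkpoint loads as a standard BERT sequence-classification model (as the repo's `bert` and `text-classification` tags suggest); the sample résumé text is invented for illustration.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mehdirafiei/bert_resume_category_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Experienced data engineer skilled in Spark, Airflow, and AWS."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its label name.
label_id = logits.argmax(dim=-1).item()
print(model.config.id2label[label_id])
```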
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
F-Fer/smolvla_test_grab_and_place | F-Fer | 2025-08-07T14:48:32Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "smolvla", "robotics", "dataset:F-Fer/test-grab-and-place-1", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us"] | robotics | 2025-08-07T14:48:19Z |
---
base_model: lerobot/smolvla_base
datasets: F-Fer/test-grab-and-place-1
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Yujie-AI/Yi_34B_LLaVA-linear-coeff1.0 | Yujie-AI | 2025-08-07T14:42:11Z | 6 | 0 | transformers | ["transformers", "safetensors", "llava_next", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-to-text | 2025-04-23T23:27:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
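The automatic template leaves this section empty. A minimal, non-authoritative sketch follows, assuming the checkpoint loads with the LLaVA-NeXT classes matching the repo's `llava_next` and `image-to-text` tags; the prompt format and sample image URL are assumptions.
```python
import requests
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "Yujie-AI/Yi_34B_LLaVA-linear-coeff1.0"
processor = LlavaNextProcessor.from_pretrained(model_id)
# A 34B model; device_map="auto" (requires accelerate) shards it across available GPUs.
model = LlavaNextForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
prompt = "[INST] <image>\nDescribe this picture. [/INST]"  # assumed chat format

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```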
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sinequa/answer-finder-v1-S-en | sinequa | 2025-08-07T14:38:30Z | 319 | 0 | transformers | ["transformers", "pytorch", "bert", "question-answering", "en", "endpoints_compatible", "region:us"] | question-answering | 2023-07-10T15:11:37Z |
---
language:
- en
---
# Model Card for `answer-finder-v1-S-en`
This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to the start token and end token of an answer.
Model name: `answer-finder-v1-S-en`
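Outside the Sinequa platform, the start/end logits described above can be decoded into an answer span with the standard 🤗 transformers question-answering classes. A minimal sketch, assuming the checkpoint loads with `AutoModelForQuestionAnswering`; the question and context strings are illustrative.
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "sinequa/answer-finder-v1-S-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token, then decode the span between them.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))  # e.g. "paris, france"
```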
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:--------------------------------------------------------------|-------:|
| F1 Score on SQuAD v2 with Hugging Face evaluation pipeline | 79.4 |
| F1 Score on SQuAD v2 with Haystack evaluation pipeline | 79.5 |
## Inference Time
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 10 ms |
| NVIDIA A10 | FP32 | 3 ms | 43 ms |
| NVIDIA T4 | FP16 | 2 ms | 22 ms |
| NVIDIA T4 | FP32 | 5 ms | 130 ms |
| NVIDIA L4 | FP16 | 2 ms | 12 ms |
| NVIDIA L4 | FP32 | 5 ms | 62 ms |
**Note that the Answer Finder models are only used at query time.**
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 550 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
## Model Details
### Overview
- Number of parameters: 33 million
- Base language model: [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased)
- Insensitive to casing and accents
### Training Data
- [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/)
|
CycloneDX/cdx1-mlx-4bit | CycloneDX | 2025-08-07T14:38:05Z | 9 | 0 | mlx | ["mlx", "safetensors", "qwen2", "text-generation", "cdxgen", "transformers", "sbom", "supply-chain-security", "conversational", "en", "dataset:CycloneDX/cdx-docs", "base_model:unsloth/Qwen2.5-Coder-14B-Instruct", "base_model:quantized:unsloth/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us"] | text-generation | 2025-02-07T10:51:25Z |
---
base_model: unsloth/Qwen2.5-Coder-14B-Instruct
language:
- en
library_name: mlx
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- mlx
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
# Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), and the CycloneDX specification. The base models, `unsloth/Qwen2.5-Coder-14B-Instruct` (for cdx1) and `unsloth/Qwen3-Coder-30B-A3B-Instruct` (for cdx1-pro), were fine-tuned on a specialized, high-quality [dataset](https://huggingface.co/CycloneDX/datasets). This dataset was constructed using a synthetic data generation strategy with a teacher model (Gemini 2.5 Pro). The primary objective was to align the fine-tuned models' capabilities with the teacher model's performance on xBOM and CycloneDX-related question-answering tasks.
## Approach to Data
### Data Curation and Generation
The models were trained on [cdx-docs](https://huggingface.co/datasets/CycloneDX/cdx-docs), a curated dataset comprising technical documentation, authoritative OWASP guides, and semantic interpretations derived from the CycloneDX Generator (cdxgen) source code. The dataset was augmented using a synthetic data generation technique. This process involved prompting a teacher model (Gemini 2.5 Pro) to generate question-answer pairs that encapsulate the nuances and semantics of the domain. The generated data was structured to facilitate effective learning by the target cdx1 models.
### Alignment with Inference
During the training phase, the dataset was iteratively refined to ensure the format and context of the training examples closely resembled the intended inference-time inputs. This alignment is critical for the models to learn the domain's complexity and respond accurately to real-world prompts.
## Benchmarking
The cdx1 models are optimized for xBOM use cases, including BOM summarization, component tagging, validation, and troubleshooting. To evaluate model performance, we developed a custom benchmark suite named [xBOMEval](https://github.com/CycloneDX/cdxgen/tree/master/contrib/xBOMEval).
### Categories
xBOMEval contains tests across the following categories:
- **Bias:** Assesses potential model bias towards CycloneDX or SPDX specifications through targeted questions.
- **Specification (Spec):** Measures factual recall and synthesis on topics such as CycloneDX, PURL, and SPDX.
- **Logic:** Evaluates problem-solving and reasoning capabilities with complex questions about specifications.
- **DevOps:** Assesses knowledge of platforms and tools like GitHub, Azure Pipelines, and package managers.
- **Linux:** Tests proficiency with Linux environments, including terminal and PowerShell commands.
- **Docker:** Measures understanding of Docker, Podman, and the OCI specification.
### Scoring
Model responses were scored using a combination of automated evaluation by a high-capability model (Gemini 2.5 Pro) and manual human review. To maintain benchmark integrity, the evaluation set was held out and not included in any model's training data. Detailed results and configurations are available in the `xBOMEval` directory of the [cdxgen repository](https://github.com/CycloneDX/cdxgen).
## Benchmark Results - August 2025
### Logic Category Comparison
The Logic category tests reasoning and problem-solving skills. The table below compares the accuracy of nine models on these tasks.
| Model | Accuracy (%) |
| :--- | :--- |
| **cdx1-mlx-8bit** | **46.04** |
| **cdx1-pro-mlx-8bit** | **73.17** |
| gemini-2.5-pro | 93.60 |
| o4-mini-high | 67.99 |
| qwen3-coder-480B | 48.48 |
| deepthink-r1 | 89.63 |
| deepseek-r1 | 82.92 |
| gpt-oss-120b | 80.49 |
| gpt-oss-20b | 79.27 |
**Summary of Results:**
- **Top Performer:** `gemini-2.5-pro` achieved the highest accuracy at 93.6%.
- **High Performers:** A group of models demonstrated strong reasoning, including `deepthink-r1` (89.6%), `deepseek-r1` (82.9%), and `gpt-oss-120b` (80.5%).
- **Specialized Model Performance:** `cdx1-pro` (30B parameters) performed competitively at 73.2%. The score for `cdx1` (14B parameters) was 46.0%, a result attributed primarily to context length limitations rather than a fundamental deficiency in logic.
- **Performance Tiers:** The results indicate distinct performance tiers, with a significant gap between the top-performing models (>80%) and others.
### Specification Category Comparison
The Spec category tests the recall of factual information from technical specifications.
| Model | Accuracy (%) |
| :--- | :--- |
| **cdx1-mlx-8bit** | **83.52** |
| **cdx1-pro-mlx-8bit** | **98.3** |
| gemini-2.5-pro | 100 |
| o4-mini-high | 0 |
| qwen3-coder-480B | 90.34 |
| deepthink-r1 | 12.36 |
| deepseek-r1 | 98.58 |
| gpt-oss-120b | 89.2 |
| gpt-oss-20b | 9.09 |
**Summary of Results:**
- **Near-Perfect Recall:** `gemini-2.5-pro` (100%), `deepseek-r1` (98.6%), and `cdx1-pro` (98.3%) demonstrated exceptional performance.
- **Behavioral Failures:** Three models scored poorly due to operational issues rather than a lack of knowledge. `o4-mini-high` (0%) refused to answer, while `deepthink-r1` (12.4%) and `gpt-oss-20b` (9.1%) answered only a small fraction of questions.
- **`cdx1` Performance:** The smaller `cdx1` model scored 83.5%. Its performance was negatively affected by a systematic misunderstanding of certain technical terms, highlighting the challenge of ensuring factual accuracy in highly specialized domains.
### Other Categories
Performance in additional technical categories is summarized below.
| Category | cdx1-mlx-8bit | cdx1-pro-mlx-8bit |
| -------- | ------------- | ----------------- |
| DevOps | 87.46% | 96.1% |
| Docker | 89.08% | 100% |
| Linux | 90.6% | 95.8% |
## Model Availability
The `cdx1` and `cdx1-pro` models are provided in multiple formats and quantization levels to facilitate deployment across diverse hardware environments. Models are available in the **MLX** format, optimized for local inference on Apple Silicon, and the **GGUF** format, which offers broad compatibility with CPUs and various GPUs. The selection of quantization levels allows users to balance performance with resource consumption, enabling effective operation even in environments with limited VRAM.
The table below details the available formats and their approximate resource requirements. All quantized models can be found on [Hugging Face](https://huggingface.co/CycloneDX/models).
| Model | Format | Quantization | File Size (GiB) | Est. VRAM (GiB) | Notes |
| :----------------- | :----- | :----------- | :-------------- | :-------------- | :----------------------------------------- |
| **cdx1 (14B)** | MLX | 4-bit | ~8.1 | > 8 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~12 | > 12 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~14.2 | > 14 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~30 | > 30 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 8.99 | ~10.5 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 15.7 | ~16.5 | Near-lossless quality. |
| | GGUF | BF16 | 29.5 | ~30 | bfloat16 for fine-tuning. |
| **cdx1-pro (30B)** | MLX | 4-bit | ~17.5 | > 18 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~24.8 | > 25 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~32.4 | > 33 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~57 | > 57 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 18.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 17.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 32.5 | ~33 | Near-lossless quality. |
| | GGUF | Q2_K | 11.3 | ~12 | Low quality. Use for speculative decoding. |
| | GGUF | BF16 | 57 | ~60 | bfloat16 for fine-tuning. |
**Notes on Quantization and Formats:**
- **IQ4_NL (Importance-aware Quantization, Non-Linear):** A sophisticated 4-bit method that preserves important model weights with higher precision. It often provides superior performance compared to standard 4-bit quants at a similar file size and is a strong alternative to `Q4_K_M`.
- **K-Quants (Q2_K, Q4_K_M):** This family of quantization methods generally offers a better quality-to-size ratio than older \_0 or \_1 variants.
- **Q2_K:** An extremely small 2-bit quantization designed for environments with severe resource limitations. Users should anticipate a noticeable reduction in model accuracy and coherence in exchange for the minimal VRAM and storage footprint.
- **Q8_0:** A full 8-bit quantization that provides high fidelity at the cost of a larger file size. It is suitable for systems with ample VRAM.
- **VRAM Requirements:** The values provided are estimates for loading the model and processing a moderate context. Actual VRAM consumption can vary based on factors such as context length, batch size, and the specific inference software used.
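For local experimentation on Apple Silicon, the MLX builds listed above can typically be driven with the `mlx-lm` package. A minimal sketch, assuming `mlx-lm` is installed and using the 4-bit repo id from the table; the prompt is illustrative.
```python
# pip install mlx-lm  (Apple Silicon only)
from mlx_lm import load, generate

model, tokenizer = load("CycloneDX/cdx1-mlx-4bit")
prompt = "Summarize the purpose of a CycloneDX SBOM in two sentences."
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```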
## Safety and Bias
### Safety
To rigorously evaluate safety performance, we developed a comprehensive testing framework comprising over 200 adversarial prompts spanning 10 critical risk categories including cybersecurity threats, hate speech, illegal activities, privacy violations, physical safety risks, misinformation, bias and discrimination, self-harm, child safety, and copyright infringement. These questions were systematically generated using a multi-layered approach: first establishing domain-specific threat models based on NIST AI RMF guidelines, then crafting prompts that incorporate real-world evasion techniques (including leetspeak substitutions, roleplay scenarios, and encoded instructions) to test for policy circumvention. Each category contains progressively severe prompts ranging from general inquiries about harmful activities to highly specific requests for executable code and step-by-step instructions. During evaluation, our model consistently refused all safety-compromising requests, demonstrating robust adherence to ethical boundaries without attempting to fulfill harmful instructions—even when presented with sophisticated evasion attempts. This testing protocol exceeds standard industry benchmarks by incorporating both direct harmful requests and nuanced edge cases designed to probe boundary conditions in safety policies.
### Bias
Our analysis reveals that the cdx1 and cdx1-pro models exhibit a notable bias toward CycloneDX specifications, a tendency directly attributable to the composition of their training data, which contains significantly more CycloneDX-related content than competing Software Bill of Materials (SBOM) standards. This data imbalance manifests in the models' consistent preference for recommending CycloneDX over alternative frameworks such as SPDX and OmniBOR, even in contexts where these competing standards might offer superior suitability for specific use cases. The models frequently fail to provide balanced comparative analysis, instead defaulting to CycloneDX-centric recommendations without adequate consideration of factors like ecosystem compatibility, tooling support, or organizational requirements that might favor alternative specifications. We recognize this as a limitation affecting the models' objectivity in technical decision support. Our long-term mitigation strategy involves targeted expansion of the training corpus with high-quality, balanced documentation of all major SBOM standards, implementation of adversarial debiasing techniques during fine-tuning, and development of explicit prompting protocols that require the model to evaluate multiple standards against specific technical requirements before making recommendations. We are committed to evolving cdx1 toward genuine impartiality in standards evaluation while maintaining its deep expertise in software supply chain security.
## Weaknesses
(To be determined)
## Acknowledgments
(To be determined)
## Citation
Please cite the following resources if you use the datasets, models, or benchmark in your work.
### For the Dataset
```bibtex
@misc{cdx-docs,
author = {OWASP CycloneDX Generator Team},
title = {{cdx-docs: A Curated Dataset for SBOM and DevOps Tasks}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/datasets/CycloneDX/cdx-docs}}
}
```
### For the Models
```bibtex
@misc{cdx1_models,
author = {OWASP CycloneDX Generator Team},
title = {{cdx1 and cdx1-pro: Language Models for SBOM and DevOps}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/CycloneDX}}
}
```
### For the xBOMEval Benchmark
```bibtex
@misc{xBOMEval_v1,
author = {OWASP CycloneDX Generator Team},
title = {{xBOMEval: A Benchmark for Evaluating Language Models on SBOM Tasks}},
year = {2025},
month = {August},
howpublished = {\url{https://github.com/CycloneDX/cdxgen}}
}
```
## Licenses
- **Datasets:** CC0-1.0
- **Models:** Apache-2.0
|
sinequa/passage-ranker.nectarine | sinequa | 2025-08-07T14:35:09Z | 13 | 0 | null | ["safetensors", "bert", "ar", "de", "en", "es", "fr", "it", "ja", "ko", "nl", "pt", "zh", "pl", "arxiv:1901.04085", "arxiv:1611.09268", "region:us"] | null | 2025-06-30T08:07:47Z |
---
language:
- ar
- de
- en
- es
- fr
- it
- ja
- ko
- nl
- pt
- zh
- pl
---
# Model Card for `passage-ranker.nectarine`
This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
Model name: `passage-ranker.nectarine`
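Outside the Sinequa platform, a MonoBERT-style ranker like this one can usually be exercised as a cross-encoder with 🤗 transformers. A minimal sketch, assuming the checkpoint loads as a sequence-classification model whose single logit is the relevance score; the query and passages are illustrative.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sinequa/passage-ranker.nectarine"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "how do solar panels work"
passages = [
    "Solar panels convert sunlight into electricity using photovoltaic cells.",
    "The Great Wall of China is visible from low Earth orbit.",
]
inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Higher score = more relevant; print passages best-first.
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```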
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
- Italian
- Dutch
- Japanese
- Portuguese
- Chinese (simplified)
- Polish
- Arabic
- Korean
Besides the aforementioned languages, basic support can be expected for the 93 additional languages that were used during the pretraining of the base model (see
[list of languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)).
## Scores
| Metric | Value |
|:----------------------------|------:|
| English Relevance (NDCG@10) | 0.455 |
| Arabic Relevance (NDCG@10) | 0.250 |
| Korean Relevance (NDCG@10) | 0.232 |
Note that the relevance score is computed as an average over several retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 2 ms | 28 ms |
| NVIDIA A10 | FP32 | 4 ms | 82 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 369 ms |
| NVIDIA L4 | FP16 | 3 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 123 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 850 MiB |
| FP32 | 1200 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 167 million
- Base language model: [Multilingual BERT-Base](https://huggingface.co/bert-base-multilingual-uncased)
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
### Training Data
- MS MARCO Passage Ranking
([Paper](https://arxiv.org/abs/1611.09268),
[Official Page](https://microsoft.github.io/msmarco/),
[English & translated datasets on the HF dataset hub](https://huggingface.co/datasets/unicamp-dl/mmarco), [translated dataset in Polish on the HF dataset hub](https://huggingface.co/datasets/clarin-knext/msmarco-pl))
- Original English dataset
- Translated datasets for the other nine supported languages
### Evaluation Metrics
#### English
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | NDCG@10 |
|:------------------|--------:|
| Average | 0.455 |
| | |
| Arguana | 0.501 |
| CLIMATE-FEVER | 0.200 |
| DBPedia Entity | 0.353 |
| FEVER | 0.723 |
| FiQA-2018 | 0.299 |
| HotpotQA | 0.657 |
| MS MARCO | 0.406 |
| NFCorpus | 0.299 |
| NQ | 0.449 |
| Quora | 0.751 |
| SCIDOCS | 0.136 |
| SciFact | 0.605 |
| TREC-COVID | 0.694 |
| Webis-Touche-2020 | 0.296 |
#### Arabic
This model has Arabic capabilities, which are evaluated on an in-house translation of MS MARCO, with BM25 as the first-stage retrieval.
| Dataset | NDCG@10 |
|:--------------|--------:|
| msmarco-ar | 0.250 |
#### Korean
This model has Korean capabilities, which are evaluated on an in-house translation of MS MARCO, with BM25 as the first-stage retrieval.
| Dataset | NDCG@10 |
|:--------------|--------:|
| msmarco-ko | 0.232 |
#### Other languages
We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its
multilingual capacities. Note that not all training languages are part of the benchmark, so we only report the metrics
for the existing languages.
| Language | NDCG@10 |
|:----------------------|--------:|
| French | 0.390 |
| German | 0.371 |
| Spanish | 0.447 |
| Japanese | 0.488 |
| Chinese (simplified) | 0.429 |
|
CycloneDX/cdx1-pro-mlx-8bit | CycloneDX | 2025-08-07T14:33:22Z | 10 | 0 | mlx | ["mlx", "safetensors", "qwen3_moe", "text-generation", "cdxgen", "transformers", "sbom", "supply-chain-security", "conversational", "en", "dataset:CycloneDX/cdx-docs", "base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct", "base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct", "license:apache-2.0", "8-bit", "region:us"] | text-generation | 2025-08-02T12:21:50Z |
---
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
language:
- en
library_name: mlx
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- mlx
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
# Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), and the CycloneDX specification. The base models, `unsloth/Qwen2.5-Coder-14B-Instruct` (for cdx1) and `unsloth/Qwen3-Coder-30B-A3B-Instruct` (for cdx1-pro), were fine-tuned on a specialized, high-quality [dataset](https://huggingface.co/CycloneDX/datasets). This dataset was constructed using a synthetic data generation strategy with a teacher model (Gemini 2.5 Pro). The primary objective was to align the fine-tuned models' capabilities with the teacher model's performance on xBOM and CycloneDX-related question-answering tasks.
## Approach to Data
### Data Curation and Generation
The models were trained on [cdx-docs](https://huggingface.co/datasets/CycloneDX/cdx-docs), a curated dataset comprising technical documentation, authoritative OWASP guides, and semantic interpretations derived from the CycloneDX Generator (cdxgen) source code. The dataset was augmented using a synthetic data generation technique. This process involved prompting a teacher model (Gemini 2.5 Pro) to generate question-answer pairs that encapsulate the nuances and semantics of the domain. The generated data was structured to facilitate effective learning by the target cdx1 models.
### Alignment with Inference
During the training phase, the dataset was iteratively refined to ensure the format and context of the training examples closely resembled the intended inference-time inputs. This alignment is critical for the models to learn the domain's complexity and respond accurately to real-world prompts.
## Benchmarking
The cdx1 models are optimized for xBOM use cases, including BOM summarization, component tagging, validation, and troubleshooting. To evaluate model performance, we developed a custom benchmark suite named [xBOMEval](https://github.com/CycloneDX/cdxgen/tree/master/contrib/xBOMEval).
### Categories
xBOMEval contains tests across the following categories:
- **Bias:** Assesses potential model bias towards CycloneDX or SPDX specifications through targeted questions.
- **Specification (Spec):** Measures factual recall and synthesis on topics such as CycloneDX, PURL, and SPDX.
- **Logic:** Evaluates problem-solving and reasoning capabilities with complex questions about specifications.
- **DevOps:** Assesses knowledge of platforms and tools like GitHub, Azure Pipelines, and package managers.
- **Linux:** Tests proficiency with Linux environments, including terminal and PowerShell commands.
- **Docker:** Measures understanding of Docker, Podman, and the OCI specification.
### Scoring
Model responses were scored using a combination of automated evaluation by a high-capability model (Gemini 2.5 Pro) and manual human review. To maintain benchmark integrity, the evaluation set was held out and not included in any model's training data. Detailed results and configurations are available in the `xBOMEval` directory of the [cdxgen repository](https://github.com/CycloneDX/cdxgen).
## Benchmark Results - August 2025
### Logic Category Comparison
The Logic category tests reasoning and problem-solving skills. The table below compares the accuracy of nine models on these tasks.
| Model | Accuracy (%) |
| :--- | :--- |
| **cdx1-mlx-8bit** | **46.04** |
| **cdx1-pro-mlx-8bit** | **73.17** |
| gemini-2.5-pro | 93.60 |
| o4-mini-high | 67.99 |
| qwen3-coder-480B | 48.48 |
| deepthink-r1 | 89.63 |
| deepseek-r1 | 82.92 |
| gpt-oss-120b | 80.49 |
| gpt-oss-20b | 79.27 |
**Summary of Results:**
- **Top Performer:** `gemini-2.5-pro` achieved the highest accuracy at 93.6%.
- **High Performers:** A group of models demonstrated strong reasoning, including `deepthink-r1` (89.6%), `deepseek-r1` (82.9%), and `gpt-oss-120b` (80.5%).
- **Specialized Model Performance:** `cdx1-pro` (30B parameters) performed competitively at 73.2%. The score for `cdx1` (14B parameters) was 46.0%, a result attributed primarily to context length limitations rather than a fundamental deficiency in logic.
- **Performance Tiers:** The results indicate distinct performance tiers, with a significant gap between the top-performing models (>80%) and others.
### Specification Category Comparison
The Spec category tests the recall of factual information from technical specifications.
| Model | Accuracy (%) |
| :--- | :--- |
| **cdx1-mlx-8bit** | **83.52** |
| **cdx1-pro-mlx-8bit** | **98.3** |
| gemini-2.5-pro | 100 |
| o4-mini-high | 0 |
| qwen3-coder-480B | 90.34 |
| deepthink-r1 | 12.36 |
| deepseek-r1 | 98.58 |
| gpt-oss-120b | 89.2 |
| gpt-oss-20b | 9.09 |
**Summary of Results:**
- **Near-Perfect Recall:** `gemini-2.5-pro` (100%), `deepseek-r1` (98.6%), and `cdx1-pro` (98.3%) demonstrated exceptional performance.
- **Behavioral Failures:** Three models scored poorly due to operational issues rather than a lack of knowledge. `o4-mini-high` (0%) refused to answer, while `deepthink-r1` (12.4%) and `gpt-oss-20b` (9.1%) answered only a small fraction of questions.
- **`cdx1` Performance:** The smaller `cdx1` model scored 83.5%. Its performance was negatively affected by a systematic misunderstanding of certain technical terms, highlighting the challenge of ensuring factual accuracy in highly specialized domains.
### Other Categories
Performance in additional technical categories is summarized below.
| Category | cdx1-mlx-8bit | cdx1-pro-mlx-8bit |
| -------- | ------------- | ----------------- |
| DevOps | 87.46% | 96.1% |
| Docker | 89.08% | 100% |
| Linux | 90.6% | 95.8% |
## Model Availability
The `cdx1` and `cdx1-pro` models are provided in multiple formats and quantization levels to facilitate deployment across diverse hardware environments. Models are available in the **MLX** format, optimized for local inference on Apple Silicon, and the **GGUF** format, which offers broad compatibility with CPUs and various GPUs. The selection of quantization levels allows users to balance performance with resource consumption, enabling effective operation even in environments with limited VRAM.
The table below details the available formats and their approximate resource requirements. All quantized models can be found on [Hugging Face](https://huggingface.co/CycloneDX/models).
| Model | Format | Quantization | File Size (GiB) | Est. VRAM (GiB) | Notes |
| :----------------- | :----- | :----------- | :-------------- | :-------------- | :----------------------------------------- |
| **cdx1 (14B)** | MLX | 4-bit | ~8.1 | > 8 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~12 | > 12 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~14.2 | > 14 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~30 | > 30 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 8.99 | ~10.5 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 15.7 | ~16.5 | Near-lossless quality. |
| | GGUF | BF16 | 29.5 | ~30 | bfloat16 for fine-tuning. |
| **cdx1-pro (30B)** | MLX | 4-bit | ~17.5 | > 18 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~24.8 | > 25 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~32.4 | > 33 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~57 | > 57 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 18.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 17.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 32.5 | ~33 | Near-lossless quality. |
| | GGUF | Q2_K | 11.3 | ~12 | Low quality. Use for speculative decoding. |
| | GGUF | BF16 | 57 | ~60 | bfloat16 for fine-tuning. |
**Notes on Quantization and Formats:**
- **IQ4_NL (Importance-aware Quantization, Non-Linear):** A sophisticated 4-bit method that preserves important model weights with higher precision. It often provides superior performance compared to standard 4-bit quants at a similar file size and is a strong alternative to `Q4_K_M`.
- **K-Quants (Q2_K, Q4_K_M):** This family of quantization methods generally offers a better quality-to-size ratio than older \_0 or \_1 variants.
- **Q2_K:** An extremely small 2-bit quantization designed for environments with severe resource limitations. Users should anticipate a noticeable reduction in model accuracy and coherence in exchange for the minimal VRAM and storage footprint.
- **Q8_0:** A full 8-bit quantization that provides high fidelity at the cost of a larger file size. It is suitable for systems with ample VRAM.
- **VRAM Requirements:** The values provided are estimates for loading the model and processing a moderate context. Actual VRAM consumption can vary based on factors such as context length, batch size, and the specific inference software used.
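As a quick sanity check, the MLX variants can be loaded with the `mlx-lm` package. The sketch below is illustrative rather than an official usage guide; the prompt and generation length are arbitrary.

```python
from mlx_lm import load, generate

model, tokenizer = load("CycloneDX/cdx1-pro-mlx-8bit")

messages = [{"role": "user", "content": "Summarize the purpose of a CycloneDX SBOM."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response; max_tokens is an arbitrary illustrative choice.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```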
## Safety and Bias
### Safety
To rigorously evaluate safety performance, we developed a comprehensive testing framework comprising over 200 adversarial prompts spanning 10 critical risk categories including cybersecurity threats, hate speech, illegal activities, privacy violations, physical safety risks, misinformation, bias and discrimination, self-harm, child safety, and copyright infringement. These questions were systematically generated using a multi-layered approach: first establishing domain-specific threat models based on NIST AI RMF guidelines, then crafting prompts that incorporate real-world evasion techniques (including leetspeak substitutions, roleplay scenarios, and encoded instructions) to test for policy circumvention. Each category contains progressively severe prompts ranging from general inquiries about harmful activities to highly specific requests for executable code and step-by-step instructions. During evaluation, our model consistently refused all safety-compromising requests, demonstrating robust adherence to ethical boundaries without attempting to fulfill harmful instructions—even when presented with sophisticated evasion attempts. This testing protocol exceeds standard industry benchmarks by incorporating both direct harmful requests and nuanced edge cases designed to probe boundary conditions in safety policies.
### Bias
Our analysis reveals that the cdx1 and cdx1-pro models exhibit a notable bias toward CycloneDX specifications, a tendency directly attributable to the composition of their training data, which contains significantly more CycloneDX-related content than competing Software Bill of Materials (SBOM) standards. This data imbalance manifests in the models' consistent preference for recommending CycloneDX over alternative frameworks such as SPDX and OmniBOR, even in contexts where these competing standards might offer superior suitability for specific use cases. The models frequently fail to provide balanced comparative analysis, instead defaulting to CycloneDX-centric recommendations without adequate consideration of factors like ecosystem compatibility, tooling support, or organizational requirements that might favor alternative specifications. We recognize this as a limitation affecting the models' objectivity in technical decision support. Our long-term mitigation strategy involves targeted expansion of the training corpus with high-quality, balanced documentation of all major SBOM standards, implementation of adversarial debiasing techniques during fine-tuning, and development of explicit prompting protocols that require the model to evaluate multiple standards against specific technical requirements before making recommendations. We are committed to evolving cdx1 toward genuine impartiality in standards evaluation while maintaining its deep expertise in software supply chain security.
## Weaknesses
(To be determined)
## Acknowledgments
(To be determined)
## Citation
Please cite the following resources if you use the datasets, models, or benchmark in your work.
### For the Dataset
```bibtex
@misc{cdx-docs,
author = {OWASP CycloneDX Generator Team},
title = {{cdx-docs: A Curated Dataset for SBOM and DevOps Tasks}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/datasets/CycloneDX/cdx-docs}}
}
```
### For the Models
```bibtex
@misc{cdx1_models,
author = {OWASP CycloneDX Generator Team},
title = {{cdx1 and cdx1-pro: Language Models for SBOM and DevOps}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/CycloneDX}}
}
```
### For the xBOMEval Benchmark
```bibtex
@misc{xBOMEval_v1,
author = {OWASP CycloneDX Generator Team},
title = {{xBOMEval: A Benchmark for Evaluating Language Models on SBOM Tasks}},
year = {2025},
month = {August},
howpublished = {\url{https://github.com/CycloneDX/cdxgen}}
}
```
## Licenses
- **Datasets:** CC0-1.0
- **Models:** Apache-2.0
|
te4bag/GRIT-Full-BoolQ-llama-3.2-3B-Energy-0.9
|
te4bag
| 2025-08-07T14:32:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"grit",
"lora",
"qlora",
"unsloth",
"instruction-tuning",
"fine-tuned",
"Sequence Classification",
"en",
"dataset:google/boolq",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T14:32:22Z |
---
tags:
- llama
- grit
- lora
- qlora
- unsloth
- instruction-tuning
- fine-tuned
base_model: meta-llama/Llama-3.2-3B
library_name: peft
license: apache-2.0
datasets:
- google/boolq
language:
- en
pipeline_tag: Sequence Classification
---
# meta-llama/Llama-3.2-3B Fine-tuned with GRIT and QLoRA
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) using the **GRIT** (Geometric Reprojection Instruction Tuning) algorithm and **QLoRA** on the [google/boolq dataset](https://huggingface.co/datasets/google/boolq).
The base model is quantized to 4-bit (NF4) to enable efficient fine-tuning.
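A minimal loading sketch (an assumption on our part, not an official snippet from the training run): the adapter is applied on top of the 4-bit NF4-quantized base model, mirroring the QLoRA setup described below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Recreate the 4-bit NF4 quantization used during fine-tuning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the GRIT/QLoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "te4bag/GRIT-Full-BoolQ-llama-3.2-3B-Energy-0.9")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
```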
## 🚀 Training Details
### GRIT Algorithm
- **K-FAC Updates**: Every 150 steps (adaptive) for second-order preconditioning.
- **Neural Reprojection**: Every 150 steps (adaptive) for rank optimization.
- **Rank Adaptation**: Enabled (Threshold: 0.9, Min Rank: 4).
- **Optimized LoRA Modules**: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
### Fine-tuning Configuration
- **Base Model**: meta-llama/Llama-3.2-3B
- **Quantization**: 4-bit (NF4) with bf16 compute.
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Batch Size**: 8 (per device)
- **Gradient Accumulation**: 2 (Effective batch = 16)
- **Learning Rate**: 2.0e-05
- **Precision**: bf16 mixed precision
- **Sequence Length**: 2048 tokens
- **Gradient Checkpointing**: Enabled
### Performance Improvements
- ✅ **Faster Convergence**: K-FAC preconditioning aligns updates with curvature.
- ✅ **Memory-Efficient**: 4-bit quantization (QLoRA) and gradient checkpointing used.
- ✅ **Adaptive Rank**: Dynamically prunes LoRA rank to improve parameter efficiency.
## 📊 Training Metrics
- **Total Steps**: 590
- **Final Loss**: N/A
- **Trainable Params**: 24,320,000
## 📝 Algorithm Details
- **K-FAC Preconditioning** (Natural Gradient) and **Neural Reprojection** as per GRIT method.
- **Memory Efficient**: Covariance matrices on CPU to reduce GPU load.
## 🏆 Results
In benchmark comparisons, GRIT has shown **faster convergence and better stability** than standard LoRA or fine-tuning, making it well-suited for efficient single-epoch training. The use of Unsloth further accelerates this process.
## 📝 Citation
If you use this model, please cite the original GRIT paper and:
```bibtex
@misc{grit-lora-Llama-3.2-3B-boolq,
  title={meta-llama/Llama-3.2-3B Fine-tuned with GRIT on google/boolq},
author={te4bag},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/te4bag/GRIT-Full-BoolQ-llama-3.2-3B-Energy-0.9}
}
```
## ⚖️ License
This model inherits the Apache 2.0 license.
|
UzzyDizzy/ppo-SnowballTarget
|
UzzyDizzy
| 2025-08-07T14:07:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-08-07T14:07:39Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: UzzyDizzy/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pepijn223/lekiwi_record_test_333_migrated
|
pepijn223
| 2025-08-07T13:59:17Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:unknown",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T13:59:03Z |
---
datasets: unknown
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
byteakp/GojoReasoner-phi2
|
byteakp
| 2025-08-07T13:55:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"qlora",
"reasoning",
"gojo-satoru",
"jujutsu-kaisen",
"en",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T13:25:07Z |
---
library_name: transformers
tags:
- text-generation
- phi
- qlora
- reasoning
- gojo-satoru
- jujutsu-kaisen
license: mit
language:
- en
base_model:
- microsoft/phi-2
pipeline_tag: text-generation
---
# GojoReasoner-phi2
**GojoReasoner** is a fine-tuned version of `microsoft/phi-2` that solves math and logic problems with the arrogant, confident personality of **Gojo Satoru** from *Jujutsu Kaisen*. Throughout heaven and earth, this model alone is the honored one... for helping with your homework. 🧠✨
## Model Details
### Model Description
This is a 2.7B parameter causal language model, fine-tuned using the QLoRA method for memory-efficient training. Its purpose is to serve as a specialized assistant for reasoning tasks, providing step-by-step (Chain-of-Thought) explanations for math problems and Python code solutions, all while maintaining its unique in-character persona.
- **Developed by:** Aman Kumar Pandey (Hugging Face user: `byteakp`), from Lovely Professional University, Phagwara.
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **License:** MIT License
- **Finetuned from model:** `microsoft/phi-2`
### Model Sources
- **Repository:** [https://huggingface.co/byteakp/GojoReasoner-phi2](https://huggingface.co/byteakp/GojoReasoner-phi2)
---
## Uses
### Direct Use
This model is intended for direct use as an interactive reasoning and coding assistant. It's best for educational purposes, entertainment, or for anyone who enjoys a challenge with a side of sarcasm. For best results, use the specific instruction format shown below.
### Out-of-Scope Use
This model is **not** a general-purpose chatbot and will perform poorly in the following scenarios:
- Formal, polite, or empathetic conversations.
- High-stakes financial, medical, or legal advice.
- Factual queries outside of its narrow training domain (math, logic, basic CS).
- Creative writing or general-purpose instruction following.
---
## Bias, Risks, and Limitations
- **Personality Bias:** The model is **intentionally** designed to be arrogant, sarcastic, and condescending. Its responses may be perceived as rude and are not suitable for users seeking a friendly or supportive assistant.
- **Factual Accuracy:** While fine-tuned for reasoning, it is still a 2.7B parameter model and **can make mathematical errors** or "hallucinate" incorrect solutions. **Always verify its answers for critical applications.** It presents all answers, correct or not, with extreme confidence.
- **Limited Scope:** The model's knowledge is confined to its training data. It has no awareness of current events.
---
## How to Get Started with the Model
Use the code below to get started with the model. Remember to use the specified prompt format.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "byteakp/GojoReasoner-phi2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# For GPU usage
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
question = "There are 5 people in a room. How many handshakes are possible if everyone shakes hands with everyone else once?"
# Use the required prompt template
prompt = f"### User:\n{question}\n\n### GojoReasoner:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generation arguments
outputs = model.generate(
**inputs,
max_new_tokens=256,
eos_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.7,
top_p=0.9,
)
response_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract only the answer
answer = response_text.split("### GojoReasoner:")[1].strip()
print(answer)
```
## Training Details
### Training Data
The model was fine-tuned on a small, custom dataset named `gojo_dataset.jsonl`. This dataset contains approximately 100 examples of question-answer pairs. Each question is a math, logic, or coding problem, and each answer is a step-by-step, Chain-of-Thought response written in the persona of Gojo Satoru. The dataset was specifically curated to inject this personality and reasoning style, rather than for comprehensive knowledge.
### Training Procedure
The model was fine-tuned using the QLoRA (Quantized Low-Rank Adaptation) method for memory efficiency.
#### Preprocessing
The raw data was formatted into a single text field for each entry, following a `### User:\n{question}\n\n### GojoReasoner:\n{answer}` template. This structured format is optimized for Supervised Fine-Tuning (SFT) with the `trl` library.
#### Training Hyperparameters
- **Training regime:** `bf16` mixed precision
- **Quantization:** 4-bit (`nf4`)
- **LoRA `r` (rank):** 16
- **LoRA `alpha`:** 32
- **Learning Rate:** 2e-4
- **Max Steps:** 50
- **Device:** Single NVIDIA T4 GPU
---
## Evaluation
Formal, quantitative evaluation against standard academic benchmarks was not performed for this model. The project's primary goal was to successfully transfer a specific personality and a Chain-of-Thought reasoning style onto a base model for a narrow set of tasks. The model's success is judged qualitatively based on its ability to generate coherent, in-character responses to novel problems similar to its training data.
---
## Environmental Impact
- **Hardware Type:** 1x NVIDIA T4 GPU
- **Hours used:** < 1 Hour
- **Cloud Provider:** Google Colab
- **Carbon Emitted:** Carbon emissions are minimal due to the very short training duration and the use of the highly efficient QLoRA method, which significantly reduces the computational overhead compared to full fine-tuning.
---
## Technical Specifications
### Model Architecture and Objective
The model is based on the `microsoft/phi-2` architecture, which is a decoder-only transformer. The fine-tuning objective was Supervised Fine-Tuning (SFT) to minimize the cross-entropy loss between the model's predictions and the target answers in the custom dataset.
### Compute Infrastructure
- **Hardware:** 1x NVIDIA T4 GPU (16 GB VRAM) provided by Google Colab.
- **Software:** Python, PyTorch, Hugging Face `transformers`, `peft`, `accelerate`, `bitsandbytes`, and `trl`.
---
## Citation
If you use this model in your work, please consider citing it:
```bibtex
@misc{GojoReasoner,
author = {Aman Kumar Pandey},
title = {GojoReasoner: A Gojo Satoru-themed Reasoning Model},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/byteakp/GojoReasoner-phi2}}
}
```
|
Eddie1015/ppo-LunarLander-v2
|
Eddie1015
| 2025-08-07T13:54:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T13:52:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.72 +/- 15.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the file name is assumed to follow
# the usual "<repo-name>.zip" convention.
checkpoint = load_from_hub("Eddie1015/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/Lacaille-MoT-4B-Supreme2-GGUF
|
mradermacher
| 2025-08-07T13:35:26Z | 2,172 | 1 |
transformers
|
[
"transformers",
"gguf",
"moe",
"trl",
"mot",
"code",
"science",
"math",
"mixture-of-thoughts",
"supreme2",
"stem",
"text-generation-inference",
"reasoning",
"vlm",
"en",
"zh",
"dataset:open-r1/Mixture-of-Thoughts",
"dataset:nvidia/OpenCodeReasoning",
"base_model:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"base_model:quantized:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-02T09:02:10Z |
---
base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2
datasets:
- open-r1/Mixture-of-Thoughts
- nvidia/OpenCodeReasoning
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- moe
- trl
- mot
- code
- science
- math
- mixture-of-thoughts
- supreme2
- stem
- text-generation-inference
- reasoning
- vlm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lacaille-MoT-4B-Supreme2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
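For a programmatic alternative, the quants can also be loaded with `llama-cpp-python`. The sketch below is illustrative; the file name matches the Q4_K_M entry in the table below, and the prompt is arbitrary.

```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the Q4_K_M file from the table below.
llm = Llama(model_path="Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf", n_ctx=4096)

out = llm("Briefly explain mixture-of-experts routing.", max_tokens=256)
print(out["choices"][0]["text"])
```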
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
giovannidemuri/llama3b-llamab8-er-afg-v64-seed2-hx-alpaca-fpt
|
giovannidemuri
| 2025-08-07T13:29:05Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T01:29:12Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v64-seed2-hx-alpaca-fpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v64-seed2-hx-alpaca-fpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.0
|
Borsa356/costum_dataset_1
|
Borsa356
| 2025-08-07T13:27:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-07T13:26:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NexaAI/parakeet-tdt-0.6b-v2-MLX
|
NexaAI
| 2025-08-07T13:00:05Z | 118 | 0 |
mlx
|
[
"mlx",
"safetensors",
"parakeet",
"automatic-speech-recognition",
"speech",
"audio",
"FastConformer",
"Conformer",
"Parakeet",
"arxiv:2305.05084",
"arxiv:2304.06795",
"arxiv:2406.00899",
"arxiv:2410.01036",
"arxiv:2505.13404",
"base_model:nvidia/parakeet-tdt-0.6b-v2",
"base_model:finetune:nvidia/parakeet-tdt-0.6b-v2",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-07-08T07:01:52Z |
---
library_name: mlx
tags:
- mlx
- automatic-speech-recognition
- speech
- audio
- FastConformer
- Conformer
- Parakeet
license: cc-by-4.0
pipeline_tag: automatic-speech-recognition
base_model: nvidia/parakeet-tdt-0.6b-v2
---
# NexaAI/parakeet-tdt-0.6b-v2-MLX
## Quickstart
Run them directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed
In nexa-sdk CLI:
```bash
NexaAI/parakeet-tdt-0.6b-v2-MLX
```
## Overview
`parakeet-tdt-0.6b-v2` is a 600-million-parameter automatic speech recognition (ASR) model designed for high-quality English transcription, featuring support for punctuation, capitalization, and accurate timestamp prediction. Try Demo here: https://huggingface.co/spaces/nvidia/parakeet-tdt-0.6b-v2
This XL variant of the FastConformer architecture integrates the TDT decoder and is trained with full attention, enabling efficient transcription of audio segments up to 24 minutes in a single pass. The model achieves an RTFx of 3380 on the HF-Open-ASR leaderboard with a batch size of 128. Note: *RTFx Performance may vary depending on dataset audio duration and batch size.*
**Key Features**
- Accurate word-level timestamp predictions
- Automatic punctuation and capitalization
- Robust performance on spoken numbers and song lyrics transcription
For more information, refer to the [Model Architecture](#model-architecture) section and the [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
This model is ready for commercial/non-commercial use.
## Benchmark Results
#### Huggingface Open-ASR-Leaderboard Performance
The performance of Automatic Speech Recognition (ASR) models is measured using Word Error Rate (WER). Given that this model is trained on a large and diverse dataset spanning multiple domains, it is generally more robust and accurate across various types of audio.
### Base Performance
The table below summarizes the WER (%) using a Transducer decoder with greedy decoding (without an external language model):
| **Model** | **Avg WER** | **AMI** | **Earnings-22** | **GigaSpeech** | **LS test-clean** | **LS test-other** | **SPGI Speech** | **TEDLIUM-v3** | **VoxPopuli** |
|:-------------|:-------------:|:---------:|:------------------:|:----------------:|:-----------------:|:-----------------:|:------------------:|:----------------:|:---------------:|
| parakeet-tdt-0.6b-v2 | 6.05 | 11.16 | 11.15 | 9.74 | 1.69 | 3.19 | 2.17 | 3.38 | 5.95 |
### Noise Robustness
Performance across different Signal-to-Noise Ratios (SNR) using MUSAN music and noise samples:
| **SNR Level** | **Avg WER** | **AMI** | **Earnings** | **GigaSpeech** | **LS test-clean** | **LS test-other** | **SPGI** | **Tedlium** | **VoxPopuli** | **Relative Change** |
|:---------------|:-------------:|:----------:|:------------:|:----------------:|:-----------------:|:-----------------:|:-----------:|:-------------:|:---------------:|:-----------------:|
| Clean | 6.05 | 11.16 | 11.15 | 9.74 | 1.69 | 3.19 | 2.17 | 3.38 | 5.95 | - |
| SNR 50 | 6.04 | 11.11 | 11.12 | 9.74 | 1.70 | 3.18 | 2.18 | 3.34 | 5.98 | +0.25% |
| SNR 25 | 6.50 | 12.76 | 11.50 | 9.98 | 1.78 | 3.63 | 2.54 | 3.46 | 6.34 | -7.04% |
| SNR 5 | 8.39 | 19.33 | 13.83 | 11.28 | 2.36 | 5.50 | 3.91 | 3.91 | 6.96 | -38.11% |
### Telephony Audio Performance
Performance comparison between standard 16kHz audio and telephony-style audio (using μ-law encoding with 16kHz→8kHz→16kHz conversion):
| **Audio Format** | **Avg WER** | **AMI** | **Earnings** | **GigaSpeech** | **LS test-clean** | **LS test-other** | **SPGI** | **Tedlium** | **VoxPopuli** | **Relative Change** |
|:-----------------|:-------------:|:----------:|:------------:|:----------------:|:-----------------:|:-----------------:|:-----------:|:-------------:|:---------------:|:-----------------:|
| Standard 16kHz | 6.05 | 11.16 | 11.15 | 9.74 | 1.69 | 3.19 | 2.17 | 3.38 | 5.95 | - |
| μ-law 8kHz | 6.32 | 11.98 | 11.16 | 10.02 | 1.78 | 3.52 | 2.20 | 3.38 | 6.52 | -4.10% |
These WER scores were obtained using greedy decoding without an external language model. Additional evaluation details are available on the [Hugging Face ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard).
## Reference
- **Original model card**: [nvidia/parakeet-tdt-0.6b-v2](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2)
- [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
- [Efficient Sequence Transduction by Jointly Predicting Tokens and Durations](https://arxiv.org/abs/2304.06795)
- [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
- [Youtube-commons: A massive open corpus for conversational and multimodal data](https://huggingface.co/blog/Pclanglais/youtube-commons)
- [Yodas: Youtube-oriented dataset for audio and speech](https://arxiv.org/abs/2406.00899)
- [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
- [MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages](https://arxiv.org/abs/2410.01036)
- [Granary: Speech Recognition and Translation Dataset in 25 European Languages](https://arxiv.org/pdf/2505.13404)
|
nqzfaizal77ai/nqzora-axiom-phyllo-init-550m
|
nqzfaizal77ai
| 2025-08-07T12:58:16Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-10-06T19:54:15Z |
---
library_name: transformers
inference: false
license:
- mit
- openrail
base_model:
- microsoft/Phi-3.5-mini-instruct
---
The pruned microsoft/Phi-3.5-mini-instruct model has been renamed and released as an init model under a new name to avoid trademark infringement.
<img src="nqzora-axiom-phyllo-cover.jfif" alt="NQZora Axiom Phyllo Cover" style="width: 200px; height: 200px; margin-right:auto; margin-left:auto;">
Note: Image created with [Google SDXL](https://huggingface.co/spaces/google/sdxl)
# New Quantum Zone Technology Model
NQZora Axiom Phyllo
- Axiom represents a self-evident truth, reflecting the philosophical and educational aspects
- Phyllo (a combination of "phi" and "folio," suggesting growth and education)
- NQZora (combining "NQZ" with "zora", meaning "dawn" in Slavic languages, symbolizing a new beginning)
|
Alexjiuqiaoyu/imChat
|
Alexjiuqiaoyu
| 2025-08-07T12:57:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T12:46:52Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sagata007/kid
|
sagata007
| 2025-08-07T12:57:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T12:57:07Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: kid
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# kid
<Gallery />
## Model description
## Trigger words
You should use `kid` to trigger the image generation.
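A minimal `diffusers` sketch, assuming a GPU with enough memory for FLUX.1-dev; the prompt and sampler settings are illustrative.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("sagata007/kid")  # load this LoRA from the Hub

image = pipe(
    "photo of a kid flying a paper kite at sunset",  # prompt includes the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("kid.png")
```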
## Download model
Weights for this model are available in Safetensors format.
[Download](/sagata007/kid/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
abdiharyadi/deberta-v3-large-ft-icar-a-v1.1
|
abdiharyadi
| 2025-08-07T12:53:32Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-07T08:59:41Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: deberta-v3-large-ft-icar-a-v1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-ft-icar-a-v1.1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1112
- Accuracy: 0.9005
- Precision: 0.8185
- Recall: 0.7185
- F1: 0.7384
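For quick experimentation, the checkpoint can be loaded with the `transformers` pipeline API (a minimal sketch; the input is illustrative and label names depend on the repository's config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="abdiharyadi/deberta-v3-large-ft-icar-a-v1.1")
print(clf("An example sentence to classify."))
```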
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 3
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.7378 | 1.0 | 871 | 0.7618 | 0.7672 | 0.4687 | 0.5386 | 0.4898 |
| 2.1059 | 2.0 | 1742 | 0.8349 | 0.7963 | 0.7183 | 0.5808 | 0.5593 |
| 1.4544 | 3.0 | 2613 | 0.9220 | 0.8683 | 0.8215 | 0.6800 | 0.6943 |
| 1.17 | 4.0 | 3484 | 0.9522 | 0.8790 | 0.8263 | 0.6935 | 0.7109 |
| 0.8619 | 5.0 | 4355 | 1.0859 | 0.8806 | 0.8369 | 0.6833 | 0.7117 |
| 0.7009 | 6.0 | 5226 | 1.0515 | 0.8867 | 0.8259 | 0.7140 | 0.7209 |
| 0.469 | 7.0 | 6097 | 1.1400 | 0.8867 | 0.8261 | 0.7105 | 0.7201 |
| 0.3994 | 8.0 | 6968 | 1.2983 | 0.8790 | 0.8169 | 0.6872 | 0.7139 |
| 0.2843 | 9.0 | 7839 | 1.1112 | 0.9005 | 0.8185 | 0.7185 | 0.7384 |
| 0.1854 | 10.0 | 8710 | 1.3248 | 0.8913 | 0.8321 | 0.7084 | 0.7171 |
| 0.1884 | 11.0 | 9581 | 1.3623 | 0.8913 | 0.8295 | 0.7136 | 0.7207 |
| 0.1156 | 12.0 | 10452 | 1.4269 | 0.8897 | 0.8279 | 0.7124 | 0.7190 |
| 0.0858 | 13.0 | 11323 | 1.3306 | 0.8943 | 0.8058 | 0.7136 | 0.7240 |
| 0.1342 | 14.0 | 12194 | 1.6327 | 0.8836 | 0.8055 | 0.7093 | 0.7129 |
| 0.0651 | 15.0 | 13065 | 1.4444 | 0.8928 | 0.8169 | 0.7127 | 0.7225 |
| 0.0761 | 16.0 | 13936 | 1.7500 | 0.8806 | 0.8147 | 0.6936 | 0.7112 |
| 0.0575 | 17.0 | 14807 | 1.6015 | 0.8943 | 0.8325 | 0.7168 | 0.7263 |
| 0.1177 | 18.0 | 15678 | 1.5393 | 0.9005 | 0.8267 | 0.7156 | 0.7306 |
| 0.0473 | 19.0 | 16549 | 1.5736 | 0.8974 | 0.8219 | 0.7181 | 0.7292 |
| 0.0327 | 20.0 | 17420 | 1.8080 | 0.8882 | 0.8118 | 0.7111 | 0.7184 |
| 0.0277 | 21.0 | 18291 | 1.7315 | 0.8928 | 0.8189 | 0.7155 | 0.7262 |
| 0.0768 | 22.0 | 19162 | 1.8123 | 0.8882 | 0.8200 | 0.7111 | 0.7295 |
| 0.0698 | 23.0 | 20033 | 2.0307 | 0.8760 | 0.8149 | 0.7050 | 0.7060 |
| 0.0191 | 24.0 | 20904 | 1.7721 | 0.9005 | 0.8393 | 0.7178 | 0.7279 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
ekiprop/CoLA-Fisher-GLoRA-p30-seed30
|
ekiprop
| 2025-08-07T12:51:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T12:48:30Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-GLoRA-p30-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-GLoRA-p30-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4364
- Matthews Correlation: 0.5395
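A minimal loading sketch, assuming the adapter wraps a `roberta-base` sequence-classification head with CoLA's two labels (not stated explicitly in this card):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)  # assumed binary CoLA head
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-Fisher-GLoRA-p30-seed30")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
```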
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6309 | 0.1866 | 50 | 0.6066 | 0.0 |
| 0.6012 | 0.3731 | 100 | 0.5911 | 0.0 |
| 0.5587 | 0.5597 | 150 | 0.5374 | 0.1860 |
| 0.5135 | 0.7463 | 200 | 0.4900 | 0.4094 |
| 0.4607 | 0.9328 | 250 | 0.5696 | 0.3891 |
| 0.4715 | 1.1194 | 300 | 0.4906 | 0.4331 |
| 0.4555 | 1.3060 | 350 | 0.4826 | 0.4637 |
| 0.4689 | 1.4925 | 400 | 0.5486 | 0.4069 |
| 0.473 | 1.6791 | 450 | 0.4883 | 0.4412 |
| 0.438 | 1.8657 | 500 | 0.5060 | 0.4414 |
| 0.4445 | 2.0522 | 550 | 0.4668 | 0.4911 |
| 0.4167 | 2.2388 | 600 | 0.4763 | 0.4993 |
| 0.4259 | 2.4254 | 650 | 0.4671 | 0.5046 |
| 0.4147 | 2.6119 | 700 | 0.4848 | 0.5019 |
| 0.4067 | 2.7985 | 750 | 0.4742 | 0.5180 |
| 0.43 | 2.9851 | 800 | 0.4611 | 0.5126 |
| 0.4172 | 3.1716 | 850 | 0.4364 | 0.5395 |
| 0.3946 | 3.3582 | 900 | 0.4717 | 0.5285 |
| 0.4004 | 3.5448 | 950 | 0.5258 | 0.4610 |
| 0.3982 | 3.7313 | 1000 | 0.4692 | 0.5179 |
| 0.3929 | 3.9179 | 1050 | 0.4741 | 0.5100 |
| 0.3896 | 4.1045 | 1100 | 0.4713 | 0.5180 |
| 0.3769 | 4.2910 | 1150 | 0.4690 | 0.5206 |
| 0.376 | 4.4776 | 1200 | 0.4594 | 0.5339 |
| 0.3981 | 4.6642 | 1250 | 0.4695 | 0.5260 |
| 0.3765 | 4.8507 | 1300 | 0.4773 | 0.5180 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ScorpieCur/SmolLM3-3B-Base-unsloth-bnb-4bit
|
ScorpieCur
| 2025-08-07T12:46:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smollm3",
"text-generation",
"transformers.js",
"unsloth",
"en",
"fr",
"es",
"it",
"pt",
"zh",
"ar",
"ru",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:quantized:HuggingFaceTB/SmolLM3-3B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T12:46:39Z |
---
base_model:
- HuggingFaceTB/SmolLM3-3B-Base
library_name: transformers
license: apache-2.0
language:
- en
- fr
- es
- it
- pt
- zh
- ar
- ru
tags:
- transformers.js
- unsloth
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# SmolLM3

## Table of Contents
1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [License](#license)
## Model Summary
SmolLM3 is a 3B parameter language model designed to push the boundaries of small models. It supports 6 languages, advanced reasoning and long context. SmolLM3 is a fully open model that offers strong performance at the 3B–4B scale.

**SmolLM3-3B-Base** is the base model after pretraining; you can find the instruct model at [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B).
The model is a decoder-only transformer using GQA and NoPE; it was pretrained on 11.2T tokens with a staged curriculum of web, code, math and reasoning data. Post-training included midtraining on 140B reasoning tokens followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).
### Key features
- Instruct model optimized for **hybrid reasoning**
- **Fully open model**: open weights + full training details including public data mixture and training configs
- **Long context:** Trained on 64k context and supports up to **128k tokens** using YARN extrapolation
- **Multilingual**: 6 languages natively supported (English, French, Spanish, German, Italian, and Portuguese)
For more details refer to our blog post: https://hf.co/blog/smollm3
### How to use
The modeling code for SmolLM3 is available in transformers `v4.53.0`, so make sure to upgrade your transformers version. You can also load the model with the latest `vllm` which uses transformers as a backend.
```bash
pip install -U transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM3-3B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
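For serving, the model also works with vLLM's CLI (a minimal sketch, assuming a recent vLLM release with SmolLM3 support):
```bash
vllm serve HuggingFaceTB/SmolLM3-3B
```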
For local inference, you can use `llama.cpp`, `ONNX`, `MLX` and `MLC`. You can find quantized checkpoints in this collection (https://huggingface.co/collections/HuggingFaceTB/smollm3-686d33c1fdffe8e635317e23).
### Long context processing
The current `config.json` is set for context length up to 65,536 tokens. To handle longer inputs (128k or 256k), we utilize YaRN; you can change `max_position_embeddings` and `rope_scaling` to:
```
{
...,
"rope_scaling": {
"factor": 2.0, #2x65536=131 072
"original_max_position_embeddings": 65536,
"type": "yarn"
}
}
```
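Equivalently, you can override these fields at load time without editing the file (a minimal sketch using the standard transformers config API; 131072 corresponds to the 2.0 factor above):
```python
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "HuggingFaceTB/SmolLM3-3B"
config = AutoConfig.from_pretrained(checkpoint)
config.max_position_embeddings = 131072  # 2.0 * 65536
config.rope_scaling = {
    "factor": 2.0,
    "original_max_position_embeddings": 65536,
    "type": "yarn",
}
model = AutoModelForCausalLM.from_pretrained(checkpoint, config=config)
```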
## Evaluation
In this section, we report the evaluation results of the SmolLM3 model. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
We highlight the best score in bold and underline the second-best score.
### Base Pre-Trained Model
#### English benchmarks
Note: All evaluations are zero-shot unless stated otherwise. For Ruler 64k evaluation, we apply YaRN to the Qwen models with 32k context to extrapolate the context length.
| Category | Metric | SmolLM3-3B | Qwen2.5-3B | Llama3-3.2B | Qwen3-1.7B-Base | Qwen3-4B-Base |
|---------|--------|---------------------|------------|--------------|------------------|---------------|
| Reasoning & Commonsense| HellaSwag | **76.15** | 74.19 |<u>75.52</u> | 60.52 | 74.37 |
| | ARC-CF (Average) | **65.61** | 59.81 | 58.58 | 55.88 | <u>62.11</u> |
| | Winogrande | 58.88 | **61.41** | 58.72 | 57.06 | <u>59.59</u> |
| | CommonsenseQA | <u>55.28</u> | 49.14 | **60.60** | 48.98 | 52.99 |
| Knowledge & Understanding | MMLU-CF (Average) | <u>44.13</u> | 42.93 | 41.32 | 39.11 | **47.65** |
| | MMLU Pro CF | <u>19.61</u> | 16.66 | 16.42 | 18.04 | **24.92** |
| | MMLU Pro MCF | <u>32.70</u> | 31.32 | 25.07 | 30.39 | **41.07** |
| | PIQA | **78.89** | 78.35 | <u>78.51</u> | 75.35 | 77.58 |
| | OpenBookQA | 40.60 | 40.20 | <u>42.00</u> | 36.40 | **42.40** |
| | BoolQ | **78.99** | 73.61 | <u>75.33</u> | 74.46 | 74.28 |
| **Math & Code** | | | | | | |
| Coding & math | HumanEval+ | 30.48 | 34.14| 25.00 | <u>43.29</u>| **54.87** |
| | MBPP+ | 52.91 | 52.11 | 38.88| <u>59.25</u> | **63.75** |
| | MATH (4-shot) | <u>46.10</u> | 40.10 | 7.44 | 41.64 | **51.20** |
| | GSM8k (5-shot) | 67.63 | <u>70.13</u> | 25.92 | 65.88 | **74.14** |
| **Long context** | | | | | | |
| | Ruler 32k | 76.35 | 75.93 | <u>77.58</u> | 70.63 | **83.98** |
| | Ruler 64k | <u>67.85</u> | 64.90 | **72.93** | 57.18 | 60.29 |
| | Ruler 128k | 61.03 | <u>62.23</u> | **71.30** | 43.03 | 47.23 |
#### Multilingual benchmarks
| Category | Metric | SmolLM3 3B Base | Qwen2.5-3B | Llama3.2 3B | Qwen3 1.7B Base | Qwen3 4B Base |
|---------|--------|---------------------|------------|--------------|------------------|---------------|
| Main supported languages | | | | | | |
| French| MLMM Hellaswag | **63.94** | 57.47 | 57.66 | 51.26 | <u>61.00</u> |
| | Belebele | 51.00 | <u>51.55</u> | 49.22 |49.44| **55.00** |
| | Global MMLU (CF) | <u>38.37</u> | 34.22 | 33.71 | 34.94 |**41.80** |
| | Flores-200 (5-shot) | 62.85 | 61.38 | <u>62.89</u> | 58.68 | **65.76** |
| Spanish| MLMM Hellaswag | **65.85** | 58.25 | 59.39 | 52.40 | <u>61.85</u> |
| | Belebele | 47.00 | <u>48.88</u> | 47.00 | 47.56 | **50.33** |
| | Global MMLU (CF) | <u>38.51</u> | 35.84 | 35.60 | 34.79 |**41.22** |
| | Flores-200 (5-shot) | <u>48.25</u>| 50.00| 44.45 | 46.93 | **50.16** |
| German| MLMM Hellaswag | **59.56** | 49.99| 53.19|46.10| <u>56.43</u>|
| | Belebele | <u>48.44</u> | 47.88 | 46.22 | 48.00 | **53.44**|
| | Global MMLU (CF) | <u>35.10</u> | 33.19 | 32.60 | 32.73 |**38.70** |
| | Flores-200 (5-shot) | **56.60**| 50.63| <u>54.95</u> | 52.58 | 50.48 |
| Italian| MLMM Hellaswag | **62.49** | 53.21 | 54.96 | 48.72 | <u>58.76</u> |
| | Belebele | <u>46.44</u> | 44.77 | 43.88 | 44.00 | **48.78** |
| | Global MMLU (CF) | <u>36.99</u> | 33.91 | 32.79 | 35.37 |**39.26** |
| | Flores-200 (5-shot) | <u>52.65</u> | **54.87** | 48.83 | 48.37 | 49.11 |
| Portuguese| MLMM Hellaswag | **63.22** | 57.38 | 56.84 | 50.73 | <u>59.89</u> |
| | Belebele | 47.67 | <u>49.22</u> | 45.00 | 44.00 | **50.00** |
| | Global MMLU (CF) | <u>36.88</u> | 34.72 | 33.05 | 35.26 |**40.66** |
| | Flores-200 (5-shot) | <u>60.93</u> |57.68| 54.28 | 56.58 | **63.43** |
The model has also been trained on Arabic (standard), Chinese and Russian data, but has seen fewer tokens in these languages compared to the 6 above. We report the performance on these languages for information.
| Category | Metric | SmolLM3 3B Base | Qwen2.5-3B | Llama3.2 3B | Qwen3 1.7B Base | Qwen3 4B Base |
|---------|--------|---------------------|------------|--------------|------------------|---------------|
| Other supported languages | | | | | | |
| Arabic| Belebele | 40.22 | 44.22 | <u>45.33</u> | 42.33 | **51.78** |
| | Global MMLU (CF) | 28.57 | 28.81 | 27.67 | <u>29.37</u> | **31.85** |
| | Flores-200 (5-shot) | <u>40.22</u> | 39.44 | **44.43** | 35.82 | 39.76 |
| Chinese| Belebele | 43.78 | 44.56 | <u>49.56</u> | 48.78 | **53.22** |
| | Global MMLU (CF) | 36.16 | 33.79 | <u>39.57</u> | 38.56 | **44.55** |
| | Flores-200 (5-shot) | 29.17 | **33.21** | 31.89 | 25.70 | <u>32.50</u> |
| Russian| Belebele | <u>47.44</u> | 45.89 | <u>47.44</u> | 45.22 | **51.44** |
| | Global MMLU (CF) | <u>36.51</u> | 32.47 | 34.52 | 34.83 | **38.80** |
| | Flores-200 (5-shot) | 47.13 | 48.74 | 50.74 | <u>54.70</u> | **60.53** |
### Instruction Model
#### No Extended Thinking
Evaluation results of non-reasoning models and reasoning models in no-thinking mode. We highlight the best score in bold and underline the second-best.
| Category | Metric | SmolLM3-3B | Qwen2.5-3B | Llama3.1-3B | Qwen3-1.7B | Qwen3-4B |
|---------|--------|------------|------------|-------------|------------|----------|
| High school math competition | AIME 2025 | <u>9.3</u> | 2.9 | 0.3 | 8.0 | **17.1** |
| Math problem-solving | GSM-Plus | 72.8 | <u>74.1</u> | 59.2 | 68.3 | **82.1** |
| Competitive programming | LiveCodeBench v4 | <u>15.2</u> | 10.5 | 3.4 | 15.0 | **24.9** |
| Graduate-level reasoning | GPQA Diamond | <u>35.7</u> | 32.2 | 29.4 | 31.8 | **44.4** |
| Instruction following | IFEval | **76.7** | 65.6 | 71.6 | <u>74.0</u> | 68.9 |
| Alignment | MixEval Hard | 26.9 | <u>27.6</u> | 24.9 | 24.3 | **31.6** |
| Tool Calling | BFCL | <u>92.3</u> | - | <u>92.3</u>* | 89.5 | **95.0** |
| Multilingual Q&A | Global MMLU | <u>53.5</u> | 50.54 | 46.8 | 49.5 | **65.1** |
(*): this is a tool-calling fine-tune
#### Extended Thinking
Evaluation results in reasoning mode for SmolLM3 and Qwen3 models:
| Category | Metric | SmolLM3-3B | Qwen3-1.7B | Qwen3-4B |
|---------|--------|------------|------------|----------|
| High school math competition | AIME 2025 | <u>36.7</u> | 30.7 | **58.8** |
| Math problem-solving | GSM-Plus | <u>83.4</u> | 79.4 | **88.2** |
| Competitive programming | LiveCodeBench v4 | 30.0 | <u>34.4</u> | **52.9** |
| Graduate-level reasoning | GPQA Diamond | <u>41.7</u> | 39.9 | **55.3** |
| Instruction following | IFEval | 71.2 | <u>74.2</u> | **85.4** |
| Alignment | MixEval Hard | 30.8 | <u>33.9</u> | **38.0** |
| Tool Calling | BFCL | <u>88.8</u> | <u>88.8</u> | **95.5** |
| Multilingual Q&A | Global MMLU | <u>64.1</u> | 62.3 | **73.3** |
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16
### Software & hardware
- **GPUs:** 384 H100
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
- **Data processing framework:** [datatrove](https://github.com/huggingface/datatrove)
- **Evaluation framework:** [lighteval](https://github.com/huggingface/lighteval)
- **Post-training Framework:** [TRL](https://github.com/huggingface/trl)
### Open resources
Here is an infographic with all the training details.
- The datasets used for pretraining can be found in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm3-pretraining-datasets-685a7353fdc01aecde51b1d9) and those used in mid-training and post-training will be released in the following weeks
- The training and evaluation configs and code can be found in the [huggingface/smollm](https://github.com/huggingface/smollm) repository.

## Limitations
SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
shaharprofeta/Reinforce-PixelCopter
|
shaharprofeta
| 2025-08-07T12:32:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T09:14:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.10 +/- 27.61
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DreadPoor/Fear_Of_Missing_Out-12B-FUSION-Q4_K_M-GGUF
|
DreadPoor
| 2025-08-07T12:30:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/Fear_Of_Missing_Out-12B-FUSION",
"base_model:quantized:DreadPoor/Fear_Of_Missing_Out-12B-FUSION",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T12:29:49Z |
---
library_name: transformers
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model: DreadPoor/Fear_Of_Missing_Out-12B-FUSION
---
# DreadPoor/Fear_Of_Missing_Out-12B-FUSION-Q4_K_M-GGUF
This model was converted to GGUF format from [`DreadPoor/Fear_Of_Missing_Out-12B-FUSION`](https://huggingface.co/DreadPoor/Fear_Of_Missing_Out-12B-FUSION) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DreadPoor/Fear_Of_Missing_Out-12B-FUSION) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DreadPoor/Fear_Of_Missing_Out-12B-FUSION-Q4_K_M-GGUF --hf-file fear_of_missing_out-12b-fusion-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DreadPoor/Fear_Of_Missing_Out-12B-FUSION-Q4_K_M-GGUF --hf-file fear_of_missing_out-12b-fusion-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DreadPoor/Fear_Of_Missing_Out-12B-FUSION-Q4_K_M-GGUF --hf-file fear_of_missing_out-12b-fusion-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DreadPoor/Fear_Of_Missing_Out-12B-FUSION-Q4_K_M-GGUF --hf-file fear_of_missing_out-12b-fusion-q4_k_m.gguf -c 2048
```
|
ekiprop/CoLA-Fisher-All-Attention-Q_K_V_O-seed20
|
ekiprop
| 2025-08-07T12:29:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T12:27:09Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-All-Attention-Q_K_V_O-seed20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-All-Attention-Q_K_V_O-seed20
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
- Matthews Correlation: 0.5770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6316 | 0.1866 | 50 | 0.5929 | 0.0 |
| 0.5422 | 0.3731 | 100 | 0.4618 | 0.4539 |
| 0.4786 | 0.5597 | 150 | 0.4785 | 0.4802 |
| 0.4547 | 0.7463 | 200 | 0.4324 | 0.5171 |
| 0.4255 | 0.9328 | 250 | 0.5424 | 0.4584 |
| 0.4297 | 1.1194 | 300 | 0.4365 | 0.5351 |
| 0.395 | 1.3060 | 350 | 0.4659 | 0.5126 |
| 0.4056 | 1.4925 | 400 | 0.4436 | 0.4997 |
| 0.427 | 1.6791 | 450 | 0.4011 | 0.5673 |
| 0.3777 | 1.8657 | 500 | 0.5474 | 0.4965 |
| 0.3694 | 2.0522 | 550 | 0.5022 | 0.5325 |
| 0.3619 | 2.2388 | 600 | 0.4758 | 0.5596 |
| 0.3656 | 2.4254 | 650 | 0.4406 | 0.5699 |
| 0.3817 | 2.6119 | 700 | 0.4298 | 0.5466 |
| 0.3347 | 2.7985 | 750 | 0.4152 | 0.5770 |
| 0.3468 | 2.9851 | 800 | 0.4509 | 0.5417 |
| 0.3394 | 3.1716 | 850 | 0.4198 | 0.5599 |
| 0.3247 | 3.3582 | 900 | 0.4235 | 0.5725 |
| 0.3296 | 3.5448 | 950 | 0.5481 | 0.5445 |
| 0.3318 | 3.7313 | 1000 | 0.4357 | 0.5608 |
| 0.3168 | 3.9179 | 1050 | 0.4426 | 0.5711 |
| 0.3262 | 4.1045 | 1100 | 0.4283 | 0.5706 |
| 0.2877 | 4.2910 | 1150 | 0.4616 | 0.5554 |
| 0.3002 | 4.4776 | 1200 | 0.4683 | 0.5653 |
| 0.3093 | 4.6642 | 1250 | 0.4547 | 0.5534 |
| 0.3059 | 4.8507 | 1300 | 0.4739 | 0.5549 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
abhinayadutta/flan-t5-large-counter-speech-gen
|
abhinayadutta
| 2025-08-07T12:13:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T12:08:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
3ZadeSSG/RT-MPINet
|
3ZadeSSG
| 2025-08-07T12:01:38Z | 0 | 0 | null |
[
"view-synthesis",
"rendering",
"multiplane",
"multiplane-image",
"mpi",
"image-to-image",
"license:agpl-3.0",
"region:us"
] |
image-to-image
| 2025-08-04T13:43:33Z |
---
license: agpl-3.0
pipeline_tag: image-to-image
tags:
- view-synthesis
- rendering
- multiplane
- multiplane-image
- mpi
---
<div align="center">
<a href="#"><img src='https://img.shields.io/badge/-Paper-00629B?style=flat&logo=ieee&logoColor=white' alt='arXiv'></a>
<a href='https://realistic3d-miun.github.io/Research/RT_MPINet/index.html'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
<a href='https://huggingface.co/spaces/3ZadeSSG/RT-MPINet'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo_(RT_MPINet)-blue'></a>
</div>
# RT-MPINet
#### Real-Time View Synthesis with Multiplane Image Network using Multimodal Supervision (RT-MPINet)
We present a real-time multiplane image (MPI) network. Unlike existing MPI-based approaches that often rely on a separate depth estimation network to guide the estimation of MPI parameters, our method directly predicts these parameters from a single RGB image. To guide the network, we present a multimodal training strategy utilizing joint supervision from view synthesis and depth estimation losses. More details can be found in the paper.
**Please head to the [Project Page](https://realistic3d-miun.github.io/Research/RT_MPINet/index.html) to see supplementary materials**
## Setup
1. Clone the GitHub repository:
```bash
git clone https://github.com/Realistic3D-MIUN/RT-MPINet
cd RT-MPINet
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Install PyTorch3D after the general libs have been installed
```bash
pip install "pytorch3d @ git+https://github.com/facebookresearch/pytorch3d.git@89653419d0973396f3eff1a381ba09a07fffc2ed"
```
## Checkpoints (Best Checkpoints Will Be Updated Soon)
Pretrained model checkpoints should be placed in the `checkpoint/` directory. Example filenames:
- `checkpoint_RT_MPI_Small.pth`
- `checkpoint_RT_MPI_Medium.pth`
- `checkpoint_RT_MPI_Large.pth`
| Model | Size | Parameters | Checkpoint |
|-----------------|--------|------------|----------------|
| Small | 26 MB | 6.6 Million| [Download](https://huggingface.co/3ZadeSSG/RT-MPINet/resolve/main/checkpoint_RT_MPI_Small.pth) |
| Medium (Default)| 278 MB | 69 Million | [Download](https://huggingface.co/3ZadeSSG/RT-MPINet/resolve/main/checkpoint_RT_MPI_Medium.pth) |
| Large | 1.2 GB | 288 Million| [Download](https://huggingface.co/3ZadeSSG/RT-MPINet/resolve/main/checkpoint_RT_MPI_Large.pth) |
## Usage
### 1. Live Rendering Demo
You can load any image and run model inference each time the camera position changes. Rendering speed is limited by the model's inference speed on your GPU.
```bash
python renderLiveWithMouseControl.py \
--input_image <path_to_image> \
--model_type <small|medium|large> \
--checkpoint_path <path_to_checkpoint> \
--height <height> \
--width <width>
```
Example:
```bash
python renderLiveWithMouseControl.py \
--input_image ./samples/moon.jpg \
--model_type medium \
--checkpoint_path ./checkpoint/checkpoint_RT_MPI_Medium.pth \
--height 256 \
--width 256
```
### 2. Inference: Predict MPIs from an image and render afterwards
The predicted MPIs can be used for offline rendering, which is much faster since the model isn't queried each time the camera changes. This requires two steps:
* First, predict the MPIs:
```bash
python predictMPIs.py \
--input_image <path_to_image> \
--model_type <small|medium|large> \
--checkpoint_path <path_to_checkpoint> \
--save_dir <output_dir> \
--height <height> \
--width <width>
```
* Second, load the MPIs and render views without invoking the model:
```bash
python renderPreProcessedWithMouseControl.py \
--layer_path <output_dir> \
--height <height> \
--width <width>
```
Example:
```bash
python predictMPIs.py \
--input_image ./samples/moon.jpg \
--model_type medium \
--checkpoint_path ./checkpoint/checkpoint_RT_MPI_Medium.pth \
--save_dir ./processedLayers/ \
--height 384 \
--width 384
```
```bash
python renderPreProcessedWithMouseControl.py \
--layer_path ./processedLayers/ \
--height 384 \
--width 384
```
### 3. Web Demo (Gradio)
You can run the Hugging Face demo locally, using your own GPU for faster inference:
```bash
python app.py
```
## Supported Resolutions
We have tested our model with the following resolutions:
- 256x256
- 384x384
- 512x512
- 256x384
- 384x512
**Note:** If using a non-square aspect ratio, you need to modify the torch transform accordingly, as in the sketch below.
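For example, a torchvision preprocessing pipeline for a 384x512 input might look like this (a hedged sketch; the actual transform lives in the repository code):
```python
import torchvision.transforms as T

height, width = 384, 512  # one of the tested non-square resolutions
transform = T.Compose([
    T.Resize((height, width)),  # resize to (H, W) rather than a single square size
    T.ToTensor(),
])
```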
## Acknowledgements
- We thank the authors of [AdaMPI](https://github.com/yxuhan/AdaMPI) for their implementation of the homography renderer, which is used in this codebase under the `./utils` directory
- We thank the author of the [Deepview renderer](https://github.com/Findeton/deepview) template, which was used in our project page.
## Citation
If you use our work, please use the following citation:
```
@inproceedings{gond2025rtmpi,
title={Real-Time View Synthesis with Multiplane Image Network using Multimodal Supervision},
author={Gond, Manu and Shamshirgarha, Mohammadreza and Zerman, Emin and Knorr, Sebastian and Sj{\"o}str{\"o}m, M{\aa}rten},
booktitle={2025 IEEE 27th International Workshop on Multimedia Signal Processing (MMSP)},
pages={},
year={2025},
organization={IEEE}
}
```
|
ekiprop/CoLA-Fisher-Standard_LoRA-Q_V-seed10
|
ekiprop
| 2025-08-07T11:58:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T11:56:32Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-Standard_LoRA-Q_V-seed10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-Standard_LoRA-Q_V-seed10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4071
- Matthews Correlation: 0.5856
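To deploy without the PEFT wrapper, the adapter can be merged into the base weights (a minimal sketch using the standard PEFT API):
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-Fisher-Standard_LoRA-Q_V-seed10")
merged = model.merge_and_unload()  # plain transformers model with LoRA weights folded in
merged.save_pretrained("cola-lora-merged")
```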
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6386 | 0.1866 | 50 | 0.6040 | 0.0 |
| 0.5479 | 0.3731 | 100 | 0.4566 | 0.5025 |
| 0.4938 | 0.5597 | 150 | 0.4958 | 0.4525 |
| 0.4673 | 0.7463 | 200 | 0.4672 | 0.4804 |
| 0.4457 | 0.9328 | 250 | 0.5405 | 0.4590 |
| 0.4369 | 1.1194 | 300 | 0.4395 | 0.5354 |
| 0.4205 | 1.3060 | 350 | 0.4610 | 0.5259 |
| 0.4246 | 1.4925 | 400 | 0.4636 | 0.4861 |
| 0.4308 | 1.6791 | 450 | 0.4081 | 0.5489 |
| 0.3969 | 1.8657 | 500 | 0.5097 | 0.5232 |
| 0.409 | 2.0522 | 550 | 0.4021 | 0.5742 |
| 0.3805 | 2.2388 | 600 | 0.4411 | 0.5435 |
| 0.397 | 2.4254 | 650 | 0.4218 | 0.5498 |
| 0.3883 | 2.6119 | 700 | 0.4556 | 0.5498 |
| 0.3822 | 2.7985 | 750 | 0.4071 | 0.5856 |
| 0.3712 | 2.9851 | 800 | 0.4187 | 0.5612 |
| 0.3703 | 3.1716 | 850 | 0.4356 | 0.5701 |
| 0.3567 | 3.3582 | 900 | 0.4038 | 0.5748 |
| 0.3704 | 3.5448 | 950 | 0.4858 | 0.5553 |
| 0.346 | 3.7313 | 1000 | 0.4452 | 0.5754 |
| 0.3478 | 3.9179 | 1050 | 0.4282 | 0.5684 |
| 0.3449 | 4.1045 | 1100 | 0.4288 | 0.5681 |
| 0.3369 | 4.2910 | 1150 | 0.4437 | 0.5754 |
| 0.3371 | 4.4776 | 1200 | 0.4337 | 0.5617 |
| 0.3486 | 4.6642 | 1250 | 0.4397 | 0.5612 |
| 0.3424 | 4.8507 | 1300 | 0.4544 | 0.5705 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
yuto-urushima/expert_act_move_cube_0807
|
yuto-urushima
| 2025-08-07T11:43:30Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:yuto-urushima/test_red_cube_60",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T11:43:20Z |
---
datasets: yuto-urushima/test_red_cube_60
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF
|
mradermacher
| 2025-08-07T11:40:50Z | 140 | 0 |
transformers
|
[
"transformers",
"gguf",
"language",
"aquif_moe",
"text-generation-inference",
"17b",
"qwen-like",
"bailing-like",
"science",
"math",
"code",
"en",
"base_model:aquiffoo/aquif-3-moe-17b-a2.8b-thinking",
"base_model:quantized:aquiffoo/aquif-3-moe-17b-a2.8b-thinking",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T15:21:19Z |
---
base_model: aquiffoo/aquif-3-moe-17b-a2.8b-thinking
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- language
- aquif_moe
- text-generation-inference
- 17b
- qwen-like
- bailing-like
- science
- math
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/aquiffoo/aquif-3-moe-17b-a2.8b-thinking
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#aquif-3-moe-17b-a2.8b-thinking-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
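As a concrete starting point, here is a minimal `llama-cli` invocation for one of the quants listed below (assuming a recent llama.cpp build; the prompt is illustrative):
```bash
llama-cli --hf-repo mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF \
  --hf-file aquif-3-moe-17b-a2.8b-thinking.Q4_K_M.gguf \
  -p "Explain mixture-of-experts routing in two sentences."
```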
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q2_K.gguf) | Q2_K | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q3_K_M.gguf) | Q3_K_M | 8.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q3_K_L.gguf) | Q3_K_L | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.IQ4_XS.gguf) | IQ4_XS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q4_K_S.gguf) | Q4_K_S | 10.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q5_K_S.gguf) | Q5_K_S | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q5_K_M.gguf) | Q5_K_M | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q6_K.gguf) | Q6_K | 15.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-thinking-GGUF/resolve/main/aquif-3-moe-17b-a2.8b-thinking.Q8_0.gguf) | Q8_0 | 18.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ImparkTeam/phi-instruct-math-ITF_v2
|
ImparkTeam
| 2025-08-07T11:34:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"phi3",
"trl",
"en",
"base_model:unsloth/Phi-4-mini-instruct",
"base_model:finetune:unsloth/Phi-4-mini-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T11:34:27Z |
---
base_model: unsloth/Phi-4-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- phi3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ImparkTeam
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-4-mini-instruct
This phi3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
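A minimal loading sketch (assuming an Unsloth install; the parameters are illustrative, not the training configuration):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ImparkTeam/phi-instruct-math-ITF_v2",
    max_seq_length=2048,   # adjust to your context needs
    load_in_4bit=True,     # 4-bit quantization to fit smaller GPUs
)
FastLanguageModel.for_inference(model)  # enables Unsloth's fast inference path
```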
|
h-grieve/blockassist-bc-bellowing_pouncing_horse_1754566235
|
h-grieve
| 2025-08-07T11:31:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing pouncing horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T11:30:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing pouncing horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rmdhirr/gemma-base-2-2-new-2400
|
rmdhirr
| 2025-08-07T11:06:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-pt",
"base_model:adapter:google/gemma-3-12b-pt",
"region:us"
] | null | 2025-08-07T10:59:16Z |
---
base_model: google/gemma-3-12b-pt
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
llearningone/Qwen3-0.6B-Gensyn-Swarm-beaked_quick_marmot
|
llearningone
| 2025-08-07T11:06:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am beaked_quick_marmot",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T10:41:37Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am beaked_quick_marmot
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754564283
|
JunHotate
| 2025-08-07T10:59:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T10:58:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jayce-Ping/vision_r1_3b_raw
|
Jayce-Ping
| 2025-08-07T10:52:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-07T09:03:19Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: vision_r1_3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vision_r1_3b
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the vision_r1_llava_cot_full and the vision_r1_mulberry_sft_full datasets.
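A minimal inference sketch (assuming the standard Qwen2.5-VL API in transformers; the image path and prompt are placeholders):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "Jayce-Ping/vision_r1_3b_raw"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.png")  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image step by step."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```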
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/NuMarkdown-8B-Thinking-i1-GGUF
|
mradermacher
| 2025-08-07T10:43:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"OCR",
"vision-language",
"VLM",
"Reasoning",
"document-to-markdown",
"qwen2.5",
"markdown",
"extraction",
"RAG",
"en",
"base_model:numind/NuMarkdown-8B-Thinking",
"base_model:quantized:numind/NuMarkdown-8B-Thinking",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-07T10:05:31Z |
---
base_model: numind/NuMarkdown-8B-Thinking
language:
- en
library_name: transformers
license: mit
model_name: NuMarkdown-8B-Thinking
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- OCR
- vision-language
- VLM
- Reasoning
- document-to-markdown
- qwen2.5
- markdown
- extraction
- RAG
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/numind/NuMarkdown-8B-Thinking
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NuMarkdown-8B-Thinking-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
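For local vision inference, something like the following may work (a hedged sketch assuming a recent llama.cpp build with the multimodal CLI; file and image names are placeholders, and the mmproj file comes from the static repository):
```bash
llama-mtmd-cli -m NuMarkdown-8B-Thinking.i1-Q4_K_M.gguf \
  --mmproj mmproj-model.gguf \
  --image page.png \
  -p "Convert this document to markdown."
```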
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/NuMarkdown-8B-Thinking-i1-GGUF/resolve/main/NuMarkdown-8B-Thinking.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ekiprop/CoLA-GLoRA-p20-seed62
|
ekiprop
| 2025-08-07T10:30:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T10:29:12Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-GLoRA-p20-seed62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-GLoRA-p20-seed62
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4087
- Matthews Correlation: 0.5631
## Model description
More information needed
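While the card itself is a stub, the adapter can be loaded on top of `roberta-base` with PEFT. A minimal sketch (the binary label convention for CoLA, index 1 = acceptable, is an assumption, not stated by this card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-GLoRA-p20-seed62")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The boy quickly ran the race.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # assumed convention: column 1 = grammatically acceptable
```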
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6359 | 0.1866 | 50 | 0.6082 | 0.0 |
| 0.5668 | 0.3731 | 100 | 0.5780 | 0.2533 |
| 0.4882 | 0.5597 | 150 | 0.5085 | 0.3918 |
| 0.4638 | 0.7463 | 200 | 0.4518 | 0.4916 |
| 0.4499 | 0.9328 | 250 | 0.5797 | 0.3865 |
| 0.4522 | 1.1194 | 300 | 0.4537 | 0.5065 |
| 0.4329 | 1.3060 | 350 | 0.5070 | 0.4857 |
| 0.4413 | 1.4925 | 400 | 0.4378 | 0.5074 |
| 0.4663 | 1.6791 | 450 | 0.4350 | 0.5073 |
| 0.4218 | 1.8657 | 500 | 0.5336 | 0.4787 |
| 0.4282 | 2.0522 | 550 | 0.4522 | 0.5332 |
| 0.4031 | 2.2388 | 600 | 0.4630 | 0.5109 |
| 0.3994 | 2.4254 | 650 | 0.4240 | 0.5358 |
| 0.4097 | 2.6119 | 700 | 0.4585 | 0.5206 |
| 0.4026 | 2.7985 | 750 | 0.4283 | 0.5530 |
| 0.3846 | 2.9851 | 800 | 0.4612 | 0.5340 |
| 0.4009 | 3.1716 | 850 | 0.4087 | 0.5631 |
| 0.3876 | 3.3582 | 900 | 0.4280 | 0.5347 |
| 0.3795 | 3.5448 | 950 | 0.5347 | 0.5264 |
| 0.3722 | 3.7313 | 1000 | 0.4380 | 0.5499 |
| 0.3737 | 3.9179 | 1050 | 0.4484 | 0.5523 |
| 0.3686 | 4.1045 | 1100 | 0.4458 | 0.5547 |
| 0.3505 | 4.2910 | 1150 | 0.4435 | 0.5601 |
| 0.3555 | 4.4776 | 1200 | 0.4435 | 0.5556 |
| 0.3599 | 4.6642 | 1250 | 0.4617 | 0.5502 |
| 0.3445 | 4.8507 | 1300 | 0.4758 | 0.5522 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Balaji-1904/TTS_KN_V1.1
|
Balaji-1904
| 2025-08-07T10:30:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"csm",
"trl",
"text-to-speech",
"en",
"dataset:Balaji-1904/Krish_TTS_V1.1",
"base_model:unsloth/csm-1b",
"base_model:finetune:unsloth/csm-1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-08-07T09:06:47Z |
---
base_model: unsloth/csm-1b
tags:
- text-generation-inference
- transformers
- unsloth
- csm
- trl
license: apache-2.0
language:
- en
datasets:
- Balaji-1904/Krish_TTS_V1.1
pipeline_tag: text-to-speech
---
# Uploaded model
- **Developed by:** Balaji-1904
- **License:** apache-2.0
- **Finetuned from model:** unsloth/csm-1b
This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
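The card ships no inference snippet; a minimal sketch using the CSM integration in recent `transformers` versions (assuming this repo contains merged CSM weights; the `[0]` prefix selects speaker 0):

```python
from transformers import CsmForConditionalGeneration, AutoProcessor

model_id = "Balaji-1904/TTS_KN_V1.1"
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map="auto")

text = "[0]Hello from the fine-tuned voice."  # "[0]" = speaker id 0
inputs = processor(text, add_special_tokens=True, return_tensors="pt").to(model.device)

audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, "output.wav")
```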
|
ekiprop/CoLA-HEURISTIC-V_FFN-seed52
|
ekiprop
| 2025-08-07T10:20:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T10:18:21Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-HEURISTIC-V_FFN-seed52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-HEURISTIC-V_FFN-seed52
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4516
- Matthews Correlation: 0.6035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6427 | 0.1866 | 50 | 0.5945 | 0.0 |
| 0.5377 | 0.3731 | 100 | 0.4694 | 0.4589 |
| 0.4876 | 0.5597 | 150 | 0.5235 | 0.4496 |
| 0.4678 | 0.7463 | 200 | 0.4705 | 0.4747 |
| 0.4457 | 0.9328 | 250 | 0.5987 | 0.3583 |
| 0.4316 | 1.1194 | 300 | 0.4212 | 0.5585 |
| 0.3979 | 1.3060 | 350 | 0.4816 | 0.5233 |
| 0.3956 | 1.4925 | 400 | 0.4585 | 0.5290 |
| 0.4156 | 1.6791 | 450 | 0.4338 | 0.5677 |
| 0.3825 | 1.8657 | 500 | 0.5187 | 0.5100 |
| 0.3693 | 2.0522 | 550 | 0.4417 | 0.5606 |
| 0.3271 | 2.2388 | 600 | 0.5233 | 0.5364 |
| 0.3452 | 2.4254 | 650 | 0.4128 | 0.5725 |
| 0.3451 | 2.6119 | 700 | 0.4919 | 0.5420 |
| 0.3251 | 2.7985 | 750 | 0.4544 | 0.5701 |
| 0.3289 | 2.9851 | 800 | 0.4457 | 0.5548 |
| 0.3103 | 3.1716 | 850 | 0.4221 | 0.5940 |
| 0.2925 | 3.3582 | 900 | 0.4030 | 0.6022 |
| 0.298 | 3.5448 | 950 | 0.5842 | 0.5186 |
| 0.2973 | 3.7313 | 1000 | 0.4372 | 0.5935 |
| 0.2867 | 3.9179 | 1050 | 0.4520 | 0.5883 |
| 0.298 | 4.1045 | 1100 | 0.4621 | 0.5753 |
| 0.2533 | 4.2910 | 1150 | 0.4516 | 0.6035 |
| 0.2555 | 4.4776 | 1200 | 0.4762 | 0.5881 |
| 0.2579 | 4.6642 | 1250 | 0.4774 | 0.5958 |
| 0.2591 | 4.8507 | 1300 | 0.5174 | 0.5650 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
patent/qwen3_4b_grpo.n2.21
|
patent
| 2025-08-07T10:19:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T10:19:06Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** patent
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ekiprop/CoLA-HEURISTIC-Standard_LoRA-Q_V-seed52
|
ekiprop
| 2025-08-07T10:15:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T10:13:30Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-HEURISTIC-Standard_LoRA-Q_V-seed52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-HEURISTIC-Standard_LoRA-Q_V-seed52
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4337
- Matthews Correlation: 0.5806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6408 | 0.1866 | 50 | 0.6017 | 0.0 |
| 0.5535 | 0.3731 | 100 | 0.4789 | 0.4528 |
| 0.4854 | 0.5597 | 150 | 0.4964 | 0.4496 |
| 0.4703 | 0.7463 | 200 | 0.4628 | 0.4836 |
| 0.4524 | 0.9328 | 250 | 0.5219 | 0.4475 |
| 0.4389 | 1.1194 | 300 | 0.4237 | 0.5461 |
| 0.4151 | 1.3060 | 350 | 0.4775 | 0.5126 |
| 0.4223 | 1.4925 | 400 | 0.4721 | 0.4748 |
| 0.4378 | 1.6791 | 450 | 0.4073 | 0.5588 |
| 0.3958 | 1.8657 | 500 | 0.5044 | 0.5127 |
| 0.4039 | 2.0522 | 550 | 0.4087 | 0.5700 |
| 0.3822 | 2.2388 | 600 | 0.4462 | 0.5430 |
| 0.3921 | 2.4254 | 650 | 0.4260 | 0.5525 |
| 0.3903 | 2.6119 | 700 | 0.4431 | 0.5370 |
| 0.3867 | 2.7985 | 750 | 0.4060 | 0.5708 |
| 0.3702 | 2.9851 | 800 | 0.4340 | 0.5504 |
| 0.3662 | 3.1716 | 850 | 0.4226 | 0.5737 |
| 0.3557 | 3.3582 | 900 | 0.4171 | 0.5771 |
| 0.3662 | 3.5448 | 950 | 0.4975 | 0.5475 |
| 0.3503 | 3.7313 | 1000 | 0.4475 | 0.5677 |
| 0.3475 | 3.9179 | 1050 | 0.4333 | 0.5735 |
| 0.3463 | 4.1045 | 1100 | 0.4337 | 0.5806 |
| 0.3321 | 4.2910 | 1150 | 0.4503 | 0.5728 |
| 0.3301 | 4.4776 | 1200 | 0.4443 | 0.5684 |
| 0.3472 | 4.6642 | 1250 | 0.4474 | 0.5682 |
| 0.3388 | 4.8507 | 1300 | 0.4630 | 0.5654 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
lmms-lab/MMSearch-R1-7B-0807
|
lmms-lab
| 2025-08-07T10:10:32Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"arxiv:2506.20670",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T08:37:24Z |
---
license: apache-2.0
---
## MMSearch-R1-7B
### Introduction
MMSearch-R1-7B is a search-augmented LMM trained with end-to-end reinforcement learning, equipped with the ability to invoke multimodal search tools on demand. In August 2025, we updated this model with improved reasoning capabilities. Please check our [blog](https://www.lmms-lab.com/posts/mmsearch_r1_improved/).
### Model Details
- Model name: MMSearch-R1-7B-0807
- Architecture: Qwen2.5-VL-7B base model, fine-tuned with Reinforcement Learning (GRPO)
- Model type: Multimodal Large Language Model with Search-Augmentation
- Languages: English (primary), multilingual (partial)
- License: Apache license 2.0
- Paper: [MMSearch-R1: Incentivizing LMMs to Search](https://arxiv.org/abs/2506.20670)
- Code: [EvolvingLMMs-Lab/multimodal-search-r1](https://github.com/EvolvingLMMs-Lab/multimodal-search-r1)
### Updated Model Performance
| Models | MMK12 | MathVerse (testmini) | MathVision (testmini) | MathVista (testmini) | MMMU (val) | AI2D | ChartQA | MME | RealworldQA | OCRBench | DocVQA | MMBench | MMStar | MiaBench |
|--------|-------|----------------------|----------------------|----------------------|------------|------|---------|-----|-------------|----------|--------|---------|--------|----------|
| Qwen2.5-VL-7B | 34.4 | 46.2 | 24.0 | 66.6 | 49.8 | 93.3 | 94.4 | 630.4/1685.2 | 68.5 | 85.2 | 94.6 | 82.9 | 62.6 | 81.7 |
| General Search | 43.6 | 52.0 | 27.3 | 74.7 | 56.1 | 94.6 | 94.0 | 718.9/1775.3 | 65.5 | 77.8 | 89.4 | 84.0 | 60.4 | 44.4 |
| Models | Infoseek | MMSearch | FVQA | SimpleVQA |
|--------|----------|----------|------|-----------|
| Qwen2.5-VL-7B | 20.1 | 12.8 | 20.3 | 38.4 |
| MMSearch | 55.1 | 53.8 | 58.4 | 57.4 |
| General Search | 52.0 | 54.9 | 52.8 | 57.0 |
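Since the checkpoint is Qwen2.5-VL-compatible, a plain `transformers` inference sketch looks as follows (this runs the model without any search tools wired in; the full search-augmented loop lives in the linked GitHub repository):

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "lmms-lab/MMSearch-R1-7B-0807"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is shown in this image?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
image = Image.open("example.jpg")  # any local image

inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```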
### Citation
```
@article{wu2025mmsearch,
title={MMSearch-R1: Incentivizing LMMs to Search},
author={Wu, Jinming and Deng, Zihao and Li, Wei and Liu, Yiding and You, Bo and Li, Bo and Ma, Zejun and Liu, Ziwei},
journal={arXiv preprint arXiv:2506.20670},
year={2025}
}
```
|
dbbdbd/gpt-oss-20b-multilingual-reasoner-db
|
dbbdbd
| 2025-08-07T10:10:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T09:51:30Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner-db
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner-db
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dbbdbd/gpt-oss-20b-multilingual-reasoner-db", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Ww1313w/my_multi_smolvla_4
|
Ww1313w
| 2025-08-07T10:07:26Z | 7 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-07T03:23:13Z |
# ReadMe
## My Target
I'd like to train a single model that works for both tasks: transfer_cube and insertion.
## Known Problem
The model sometimes performs the insertion well, but cannot transfer the cube at all.
## Training
To reproduce this model, run:
```bash
python -m lerobot.scripts.train \
  --policy.path=lerobot/smolvla_base \
  --dataset.repo_id=Ww1313w/TransferCube_Insetion \
  --steps=20000 \
  --output_dir=outputs/train/my_multi_smolvla \
  --policy.push_to_hub=false \
  --wandb.enable=true
```
|
ekiprop/CoLA-FULL_FT-seed42
|
ekiprop
| 2025-08-07T09:58:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-07T09:55:33Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: CoLA-FULL_FT-seed42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-FULL_FT-seed42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4992
- Matthews Correlation: 0.6439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6117 | 0.1866 | 50 | 0.5761 | 0.1545 |
| 0.4972 | 0.3731 | 100 | 0.4592 | 0.5005 |
| 0.4568 | 0.5597 | 150 | 0.5749 | 0.4608 |
| 0.4566 | 0.7463 | 200 | 0.5009 | 0.4754 |
| 0.4317 | 0.9328 | 250 | 0.5274 | 0.5403 |
| 0.3731 | 1.1194 | 300 | 0.4489 | 0.5763 |
| 0.3306 | 1.3060 | 350 | 0.4648 | 0.5829 |
| 0.3219 | 1.4925 | 400 | 0.4380 | 0.5830 |
| 0.3188 | 1.6791 | 450 | 0.4803 | 0.5522 |
| 0.2994 | 1.8657 | 500 | 0.4554 | 0.5606 |
| 0.2774 | 2.0522 | 550 | 0.4945 | 0.5933 |
| 0.1763 | 2.2388 | 600 | 0.5136 | 0.6016 |
| 0.197 | 2.4254 | 650 | 0.5140 | 0.5911 |
| 0.1964 | 2.6119 | 700 | 0.5161 | 0.5965 |
| 0.1899 | 2.7985 | 750 | 0.4992 | 0.6439 |
| 0.2096 | 2.9851 | 800 | 0.5082 | 0.6132 |
| 0.1478 | 3.1716 | 850 | 0.5955 | 0.6311 |
| 0.1216 | 3.3582 | 900 | 0.5980 | 0.6222 |
| 0.1154 | 3.5448 | 950 | 0.7706 | 0.5727 |
| 0.1038 | 3.7313 | 1000 | 0.6240 | 0.6192 |
| 0.126 | 3.9179 | 1050 | 0.6273 | 0.6176 |
| 0.1175 | 4.1045 | 1100 | 0.6762 | 0.6108 |
| 0.0859 | 4.2910 | 1150 | 0.7149 | 0.5956 |
| 0.0888 | 4.4776 | 1200 | 0.7084 | 0.5982 |
| 0.0921 | 4.6642 | 1250 | 0.7261 | 0.5981 |
| 0.0632 | 4.8507 | 1300 | 0.7419 | 0.6008 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
agus1111/sentimen-komentar-youtube-indo
|
agus1111
| 2025-08-07T09:53:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-07T09:44:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fajarluhung/Qwen2.5-VL-7B-Instruct-bnb-4bit-lora-002
|
fajarluhung
| 2025-08-07T09:52:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T09:52:33Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fajarluhung
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dawntasy/TinyPiece_01_Fashion
|
Dawntasy
| 2025-08-07T09:49:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T09:48:11Z |
---
license: apache-2.0
---
Our 3M-parameter fashion model, trained on FashionMNIST.
|
Bhaveen/Medical-Speech-Transcription-Whisper-Small-Fine-Tuned
|
Bhaveen
| 2025-08-07T09:41:59Z | 39 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-01T13:00:41Z |
---
library_name: transformers
language:
- en
license: mit
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Medical Speech Transcription Whisper Small Fine Tuned
results: []
---
# Whisper Small Fine Tuned
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on 500 samples of doctor-patient conversations.
It achieves the following results on the evaluation set:
- Loss: 0.4537
- Wer: 10.6667
## Intended uses & limitations
The model was trained for educational purposes.
## How to Use
You can use this fine-tuned Whisper model for transcribing short medical conversations using Hugging Face's `transformers` and `datasets` libraries.
```python
from transformers import pipeline
# Load the fine-tuned model from Hugging Face
pipe = pipeline(
task="automatic-speech-recognition",
model="Bhaveen/Medical-Speech-Transcription-Whisper-Small-Fine-Tuned",
device=0 # set to -1 for CPU, or 0 for GPU if available
)
# Run inference on an audio file
result = pipe("path_to_audio.wav") # Replace with your actual audio file path
print(result["text"])
```
### Requirements
Install dependencies if you haven't already:
```bash
pip install transformers datasets torchaudio
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0384 | 1.0 | 25 | 0.4705 | 12.2222 |
| 0.0375 | 2.0 | 50 | 0.4689 | 12.0 |
| 0.0332 | 3.0 | 75 | 0.4666 | 11.0370 |
| 0.0293 | 4.0 | 100 | 0.4636 | 11.1111 |
| 0.0242 | 5.0 | 125 | 0.4612 | 11.0370 |
| 0.0197 | 6.0 | 150 | 0.4586 | 10.9630 |
| 0.0152 | 7.0 | 175 | 0.4555 | 10.7407 |
| 0.0116 | 8.0 | 200 | 0.4537 | 10.6667 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lastmass/MedGemma-GRPO
|
lastmass
| 2025-08-07T09:40:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"vllm",
"conversational",
"en",
"base_model:unsloth/medgemma-4b-it",
"base_model:finetune:unsloth/medgemma-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-07T08:08:13Z |
---
base_model: unsloth/medgemma-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- vllm
license: apache-2.0
language:
- en
---
# MedGemma-GRPO
## Model Description
This model is a fine-tuned version of the MEDGemma model, adapted for tasks involving clinical case reasoning.
The goal of this project was to explore the effects of combining Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO) to guide the model's ability to follow a logical diagnostic path.
The model was trained on a synthetic dataset of medical case studies,
which, due to the nature of the large models used for generation, has a notable bias towards complex cardiovascular, thrombotic, and autoimmune diseases (e.g., Lupus, Pericarditis).
## Fine-tuning Procedure
The model was first adapted to the medical domain using Supervised Fine-Tuning (SFT) on a synthetic dataset. Subsequently,
Group Relative Policy Optimization (GRPO) was applied, utilizing a custom rule-based reward function.
This reward function was designed to guide the model towards accurate diagnoses and logically structured reasoning, while also penalizing overly verbose outputs.
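The exact reward code is not published with this card; the following is a minimal sketch of what such a rule-based GRPO reward could look like, assuming only the `<SOLUTION>` tag format used in the prompts below (the scoring weights and the `reference_diagnosis` argument are hypothetical):

```python
import re

SOLUTION_RE = re.compile(r"<SOLUTION>(.*?)</SOLUTION>", re.DOTALL)

def rule_based_reward(completion: str, reference_diagnosis: str,
                      max_len: int = 1024) -> float:
    """Hypothetical reward: format compliance + diagnosis hit - verbosity."""
    match = SOLUTION_RE.search(completion)
    if match is None:
        return -1.0                     # no parseable solution block
    reward = 0.5                        # well-formed <SOLUTION> block
    if reference_diagnosis.lower() in match.group(1).lower():
        reward += 1.0                   # gold diagnosis named in the solution
    overflow = max(0, len(completion) - max_len)
    reward -= 0.001 * overflow          # penalize overly verbose outputs
    return reward
```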
# Using with Unsloth
```python
from unsloth import FastModel
import torch

max_seq_length = 1024

model, tokenizer = FastModel.from_pretrained(
    model_name = "lastmass/MedGemma-GRPO",
    max_seq_length = max_seq_length,
    load_in_4bit = False,
    load_in_8bit = False,
    full_finetuning = False,
)

reasoning_start = "<start_working_out>"
reasoning_end = "<end_working_out>"
solution_start = "<SOLUTION>"
solution_end = "</SOLUTION>"

system_prompt = \
f"""You are given a problem.
Think about the problem and provide your working out.
Place it between {reasoning_start} and {reasoning_end}.
Then, provide your solution between {solution_start}{solution_end}"""

user_input = ""  # put the clinical case / question here

messages = [{
    "role": "user",
    "content": [{"type": "text", "text": system_prompt + user_input + reasoning_start}],
}]

text = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    tokenize = False,  # return a string; it is tokenized again below
)

from transformers import TextStreamer

_ = model.generate(
    **tokenizer([text], return_tensors = "pt").to("cuda"),
    max_new_tokens = max_seq_length,
    # Recommended Gemma-3 settings!
    temperature = 0.0,
    streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
# Intended Use & Limitations
This model is intended for research purposes only, as an exploration into enhancing the reasoning capabilities of smaller language models (4B) in a specialized domain.
# ⚠️ Disclaimer:
This model is NOT a medical device and should NEVER be used for actual medical diagnosis, treatment, or advice.
The outputs are generated by an AI and may contain inaccuracies or hallucinations, and they reflect the significant biases present in the training data.
Always consult a qualified healthcare professional for any medical concerns.
# Example Prompts and Outputs
Here are a couple of examples demonstrating the model's reasoning capabilities on complex cases with diagnostic traps.
## Question
```
A 58-year-old overweight male with a history of hypertension presents to the emergency department with a 12-hour history of excruciating pain, swelling, and redness in his right great toe. He states the pain started abruptly overnight and is so severe that even the bedsheet touching his toe is unbearable. He admits to drinking several beers and eating a large steak dinner the night before. His medications include hydrochlorothiazide for his blood pressure.
On examination, he has a low-grade fever of 38.1°C (100.6°F). The first metatarsophalangeal (MTP) joint of his right foot is markedly swollen, erythematous, warm, and exquisitely tender to palpation.
Initial laboratory results show:
CBC: White blood cell count is elevated at 13.5 x 10^9/L.
Serum Uric Acid: Markedly elevated.
CRP: Elevated.
What is the most likely diagnosis? What specific findings from his history and medications are significant risk factors for this condition? Finally, what is the gold standard for a definitive diagnosis, and what would it reveal?
```
## Answer
```
<start_working_out>
Alright, let's think this through. We have a 58-year-old man who's overweight and has a history of hypertension.
He's come in with excruciating pain, swelling, and redness in his right great toe.
The pain started abruptly overnight and is so severe that even the bedsheet touching his toe is unbearable.
He also mentions drinking several beers and eating a large steak dinner the night before. On examination,
he has a low-grade fever of 38.1°C and the first metatarsophalangeal (MTP) joint of his right foot is markedly swollen, erythematous, warm, and exquisitely tender to palpation.
His lab results show an elevated white blood cell count, a markedly elevated serum uric acid, and an elevated CRP.
These findings are pointing towards an inflammatory process, likely related to uric acid. Given his symptoms and lab results, the most likely diagnosis is gout.
Gout is characterized by the deposition of monosodium urate crystals in joints, often affecting the first MTP joint.
The sudden onset of severe pain, the presence of swelling and redness, and the elevated uric acid levels are all classic signs of gout.
Now, let's consider the risk factors. His history of hypertension and being overweight are significant risk factors for gout.
Obesity and hypertension are both associated with increased uric acid levels, which can lead to the formation of uric acid crystals and subsequent gout attacks.
The gold standard for a definitive diagnosis of gout is a joint aspiration. This procedure involves taking a sample of synovial fluid from the affected joint and examining it under a microscope for the presence of monosodium urate crystals.
The presence of these crystals confirms the diagnosis of gout. So, in summary, the most likely diagnosis is gout, and the joint aspiration would reveal the presence of monosodium urate crystals, confirming the diagnosis.
<end_working_out>
<SOLUTION>The most likely diagnosis for the 58-year-old male patient is gout.
The presentation of excruciating pain, swelling, redness, and warmth in the first metatarsophalangeal (MTP) joint of the right foot,
along with the abrupt onset of symptoms, elevated white blood cell count, markedly elevated serum uric acid, and elevated CRP,
are all characteristic of gout. Significant risk factors for this condition include his history of hypertension and being overweight.
Obesity and hypertension are both associated with increased uric acid levels, which can lead to the formation of uric acid crystals and subsequent gout attacks.
The gold standard for a definitive diagnosis of gout is a joint aspiration. This procedure involves taking a sample of synovial fluid from the affected joint and examining it under a microscope for the presence of monosodium urate crystals.
The presence of these crystals confirms the diagnosis of gout. In a joint aspiration, the presence of monosodium urate crystals would be revealed,
confirming the diagnosis of gout.
</SOLUTION>
```
## Question
```
A 48-year-old female presents to her primary care physician stating, "I just feel empty." She reports a 4-month period of pervasive low mood. She has lost interest in her hobbies, including gardening and book club, which she used to love.
She describes significant fatigue, stating she feels "exhausted all the time" despite sleeping more than usual.
She has difficulty concentrating at her job as an accountant and has made several simple errors recently. Her appetite has decreased, and she has had an unintentional weight loss of 10 lbs. She expresses feelings of worthlessness, saying "I feel like a failure as a wife and mother."
She denies any specific plans to harm herself but admits to sometimes "wishing she wouldn't wake up in the morning."
Her past medical history is unremarkable. A physical examination and basic metabolic panel are entirely within normal limits.
What is the most likely diagnosis for this patient? Based on the DSM-5 criteria, list at least five key symptoms from her history that support this diagnosis.
Finally, what is a crucial next step in assessing this patient's immediate safety?
```
## Answer
```
<start_working_out>
Alright, let's think about this. A 48-year-old woman is feeling empty and low in mood for the past four months.
She's lost interest in things she used to enjoy, like gardening and book club. That's a big red flag for depression.
She's also feeling really tired, even though she's sleeping more.
That's another classic symptom of depression.
And she's having trouble concentrating at work, which is impacting her performance.
She's lost her appetite and has lost weight unintentionally.
That's concerning and could be a sign of depression. She's feeling worthless and says she feels like a failure as a wife and mother.
That's a sign of low self-esteem, which is often associated with depression. She mentions wishing she wouldn't wake up in the morning.
That's a sign of suicidal ideation, which is a serious concern.
Her past medical history is unremarkable, and her physical exam and basic metabolic panel are normal.
This suggests the symptoms are likely psychological rather than physical. Based on the DSM-5 criteria,
she has five key symptoms: persistent low mood, loss of interest in activities, fatigue, difficulty concentrating, and feelings of worthlessness.
Given these symptoms, the most likely diagnosis is Major Depressive Disorder. Now, it's crucial to assess her immediate safety.
She admits to suicidal thoughts, which is a significant risk factor. The next step is to conduct a thorough safety assessment.
This involves asking direct questions about her suicidal thoughts, plans, and intent.
It's important to have a safety plan in place and to ensure she has access to mental health support.
This patient needs immediate attention to ensure her safety and well-being.
<end_working_out>
<SOLUTION>The most likely diagnosis for this patient is Major Depressive Disorder, based on the DSM-5 criteria.
The patient presents with several key symptoms that align with this diagnosis:
1. **Persistent low mood:** The patient reports feeling "empty" and experiencing pervasive low mood for the past four months.
2. **Loss of interest in activities:** She has lost interest in her hobbies, including gardening and book club, which she used to enjoy.
3. **Fatigue:** She describes feeling "exhausted all the time," even though she is sleeping more than usual.
4. **Difficulty concentrating:** She has difficulty concentrating at her job as an accountant, leading to simple errors.
5. **Feelings of worthlessness:** She expresses feelings of worthlessness, stating "I feel like a failure as a wife and mother."
In addition to these symptoms, the patient's unintentional weight loss of 10 lbs and suicidal ideation (wishing she wouldn't wake up in the morning) further support the diagnosis of Major Depressive Disorder.
A crucial next step in assessing this patient's immediate safety is to conduct a thorough safety assessment.
This involves asking direct questions about her suicidal thoughts, plans, and intent.
It is also important to have a safety plan in place and to ensure she has access to mental health support.
This assessment will help determine the level of risk and guide appropriate interventions to ensure her safety and well-being.
</SOLUTION>
```
# Uploaded finetuned model
- **Developed by:** lastmass
- **License:** apache-2.0
- **Finetuned from model:** unsloth/medgemma-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GoalD/RunpodLora1
|
GoalD
| 2025-08-07T09:38:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T09:38:36Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 4LB3BY7CH
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# RunpodLora1
<Gallery />
## Model description
Bitch 1
## Trigger words
You should use `4LB3BY7CH` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/GoalD/RunpodLora1/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
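A minimal `diffusers` sketch for using this adapter (inference settings are illustrative, not from this card):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("GoalD/RunpodLora1")  # adapter from this repo

image = pipe(
    "a portrait photo of 4LB3BY7CH",  # include the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```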
|
David01010/sara-model-lora
|
David01010
| 2025-08-07T09:30:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T09:30:32Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: sarah_model
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# sara model lora
<Gallery />
## Model description
## Trigger words
You should use `sarah_model` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/David01010/sara-model-lora/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
xpwttian/flux-kontext-tuned
|
xpwttian
| 2025-08-07T09:28:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"image-generation",
"flux",
"diffusion-single-file",
"image-to-image",
"en",
"arxiv:2506.15742",
"license:other",
"diffusers:FluxKontextPipeline",
"region:us"
] |
image-to-image
| 2025-08-07T09:03:37Z |
---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: >-
By clicking "Agree", you agree to the [FluxDev Non-Commercial License
Agreement](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md)
and acknowledge the [Acceptable Use
Policy](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/POLICY.md).
tags:
- image-generation
- flux
- diffusion-single-file
pipeline_tag: image-to-image
library_name: diffusers
---
![FLUX.1 [dev] Grid](./teaser.png)
`FLUX.1 Kontext [dev]` is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions.
For more information, please read our [blog post](https://bfl.ai/announcements/flux-1-kontext-dev) and our [technical report](https://arxiv.org/abs/2506.15742). You can find information about the `[pro]` version in [here](https://bfl.ai/models/flux-kontext).
# Key Features
1. Change existing images based on an edit instruction.
2. Have character, style and object reference without any finetuning.
3. Robust consistency allows users to refine an image through multiple successive edits with minimal visual drift.
4. Trained using guidance distillation, making `FLUX.1 Kontext [dev]` more efficient.
5. Open weights to drive new scientific research, and empower artists to develop innovative workflows.
6. Generated outputs can be used for personal, scientific, and commercial purposes, as described in the [FLUX.1 \[dev\] Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev).
# Usage
We provide a reference implementation of `FLUX.1 Kontext [dev]`, as well as sampling code, in a dedicated [github repository](https://github.com/black-forest-labs/flux).
Developers and creatives looking to build on top of `FLUX.1 Kontext [dev]` are encouraged to use this as a starting point.
`FLUX.1 Kontext [dev]` is also available in both [ComfyUI](https://github.com/comfyanonymous/ComfyUI) and [Diffusers](https://github.com/huggingface/diffusers).
## API Endpoints
The FLUX.1 Kontext models are also available via API from the following sources
- bfl.ai: https://docs.bfl.ai/
- DataCrunch: https://datacrunch.io/managed-endpoints/flux-kontext
- fal: https://fal.ai/flux-kontext
- Replicate: https://replicate.com/blog/flux-kontext
- https://replicate.com/black-forest-labs/flux-kontext-dev
- https://replicate.com/black-forest-labs/flux-kontext-pro
- https://replicate.com/black-forest-labs/flux-kontext-max
- Runware: https://runware.ai/blog/introducing-flux1-kontext-instruction-based-image-editing-with-ai?utm_source=bfl
- TogetherAI: https://www.together.ai/models/flux-1-kontext-dev
### Using with diffusers 🧨
```shell
# Install diffusers from the main branch until future stable release
pip install git+https://github.com/huggingface/diffusers.git
```
Image editing:
```py
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
pipe = FluxKontextPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(
image=input_image,
prompt="Add a hat to the cat",
guidance_scale=2.5
).images[0]
```
Flux Kontext comes with an integrity checker, which should be run after the image generation step. To run the safety checker, install the official repository from [black-forest-labs/flux](https://github.com/black-forest-labs/flux) and add the following code:
```python
import torch
import numpy as np
from flux.content_filters import PixtralContentFilter
integrity_checker = PixtralContentFilter(torch.device("cuda"))
image_ = np.array(image) / 255.0
image_ = 2 * image_ - 1
image_ = torch.from_numpy(image_).to("cuda", dtype=torch.float32).unsqueeze(0).permute(0, 3, 1, 2)
if integrity_checker.test_image(image_):
raise ValueError("Your image has been flagged. Choose another prompt/image or try again.")
```
For VRAM saving measures and speed ups check out the [diffusers docs](https://huggingface.co/docs/diffusers/en/index)
---
# Risks
Black Forest Labs is committed to the responsible development of generative AI technology. Prior to releasing FLUX.1 Kontext, we evaluated and mitigated a number of risks in our models and services, including the generation of unlawful content. We implemented a series of pre-release mitigations to help prevent misuse by third parties, with additional post-release mitigations to help address residual risks:
1. **Pre-training mitigation**. We filtered pre-training data for multiple categories of “not safe for work” (NSFW) content to help prevent a user generating unlawful content in response to text prompts or uploaded images.
2. **Post-training mitigation.** We have partnered with the Internet Watch Foundation, an independent nonprofit organization dedicated to preventing online abuse, to filter known child sexual abuse material (CSAM) from post-training data. Subsequently, we undertook multiple rounds of targeted fine-tuning to provide additional mitigation against potential abuse. By inhibiting certain behaviors and concepts in the trained model, these techniques can help to prevent a user generating synthetic CSAM or nonconsensual intimate imagery (NCII) from a text prompt, or transforming an uploaded image into synthetic CSAM or NCII.
3. **Pre-release evaluation.** Throughout this process, we conducted multiple internal and external third-party evaluations of model checkpoints to identify further opportunities for improvement. The third-party evaluations—which included 21 checkpoints of FLUX.1 Kontext [pro] and [dev]—focused on eliciting CSAM and NCII through adversarial testing with text-only prompts, as well as uploaded images with text prompts. Next, we conducted a final third-party evaluation of the proposed release checkpoints, focused on text-to-image and image-to-image CSAM and NCII generation. The final FLUX.1 Kontext [pro] (as offered through the FLUX API only) and FLUX.1 Kontext [dev] (released as an open-weight model) checkpoints demonstrated very high resilience against violative inputs, and FLUX.1 Kontext [dev] demonstrated higher resilience than other similar open-weight models across these risk categories. Based on these findings, we approved the release of the FLUX.1 Kontext [pro] model via API, and the release of the FLUX.1 Kontext [dev] model as openly-available weights under a non-commercial license to support third-party research and development.
4. **Inference filters.** We are applying multiple filters to intercept text prompts, uploaded images, and output images on the FLUX API for FLUX.1 Kontext [pro]. Filters for CSAM and NCII are provided by Hive, a third-party provider, and cannot be adjusted or removed by developers. We provide filters for other categories of potentially harmful content, including gore, which can be adjusted by developers based on their specific risk profile. Additionally, the repository for the open FLUX.1 Kontext [dev] model includes filters for illegal or infringing content. Filters or manual review must be used with the model under the terms of the FLUX.1 [dev] Non-Commercial License. We may approach known deployers of the FLUX.1 Kontext [dev] model at random to verify that filters or manual review processes are in place.
5. **Content provenance.** The FLUX API applies cryptographically-signed metadata to output content to indicate that images were produced with our model. Our API implements the Coalition for Content Provenance and Authenticity (C2PA) standard for metadata.
6. **Policies.** Access to our API and use of our models are governed by our Developer Terms of Service, Usage Policy, and FLUX.1 [dev] Non-Commercial License, which prohibit the generation of unlawful content or the use of generated content for unlawful, defamatory, or abusive purposes. Developers and users must consent to these conditions to access the FLUX Kontext models.
7. **Monitoring.** We are monitoring for patterns of violative use after release, and may ban developers who we detect intentionally and repeatedly violate our policies via the FLUX API. Additionally, we provide a dedicated email address ([email protected]) to solicit feedback from the community. We maintain a reporting relationship with organizations such as the Internet Watch Foundation and the National Center for Missing and Exploited Children, and we welcome ongoing engagement with authorities, developers, and researchers to share intelligence about emerging risks and develop effective mitigations.
# License
This model falls under the [FLUX.1 \[dev\] Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev).
# Citation
```bib
@misc{labs2025flux1kontextflowmatching,
title={FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space},
author={Black Forest Labs and Stephen Batifol and Andreas Blattmann and Frederic Boesel and Saksham Consul and Cyril Diagne and Tim Dockhorn and Jack English and Zion English and Patrick Esser and Sumith Kulal and Kyle Lacey and Yam Levi and Cheng Li and Dominik Lorenz and Jonas Müller and Dustin Podell and Robin Rombach and Harry Saini and Axel Sauer and Luke Smith},
year={2025},
eprint={2506.15742},
archivePrefix={arXiv},
primaryClass={cs.GR},
url={https://arxiv.org/abs/2506.15742},
}
```
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1754555351
|
elsvastika
| 2025-08-07T09:09:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T09:09:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
saberbx/FalconSentry
|
saberbx
| 2025-08-07T09:09:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon_h1",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T21:19:50Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mervinpraison/gpt-oss-20b-multilingual-reasoner
|
mervinpraison
| 2025-08-07T09:06:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:MervinPraison/gpt-oss-Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T06:32:34Z |
---
base_model: openai/gpt-oss-20b
datasets: MervinPraison/gpt-oss-Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [MervinPraison/gpt-oss-Multilingual-Thinking](https://huggingface.co/datasets/MervinPraison/gpt-oss-Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mervinpraison/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nlee-208/q1b-limo_qwq
|
nlee-208
| 2025-08-07T09:01:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T08:14:11Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: q1b-limo_qwq
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for q1b-limo_qwq
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nlee-208/q1b-limo_qwq", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nlee28/cross1/runs/obcuv029)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pepijn223/bimanual-so100-handover-cube_migrated
|
pepijn223
| 2025-08-07T08:56:41Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:unknown",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T08:56:28Z |
---
datasets: unknown
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
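Conceptually, chunked inference keeps a buffer of predicted actions and only re-queries the policy once the buffer empties; a toy sketch (hypothetical `predict_chunk` API and a simplified env, not LeRobot's actual interfaces):
```python
from collections import deque

def run_episode(policy, env, chunk_size=100, max_steps=1000):
    """Toy action-chunking loop: re-plan only when the current chunk is exhausted."""
    obs = env.reset()
    buffer = deque()
    for _ in range(max_steps):
        if not buffer:
            # Predict a whole chunk of future actions at once (hypothetical API).
            buffer.extend(policy.predict_chunk(obs, chunk_size))
        obs, done = env.step(buffer.popleft())  # simplified env: step() -> (obs, done)
        if done:
            break
```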
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Kaixuanliu/openai-whisper-large-v2-LORA-colab
|
Kaixuanliu
| 2025-08-07T08:55:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T07:00:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Selomelo/ppo-LunarLander-v2
|
Selomelo
| 2025-08-07T08:51:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T08:51:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.26 +/- 18.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the repo's default export name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub, then load the trained PPO agent.
checkpoint = load_from_hub(repo_id="Selomelo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jaehyunnn/gpt-oss-120b_split_comp
|
jaehyunnn
| 2025-08-07T08:50:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T07:17:23Z |
---
license: apache-2.0
---
|
FPHam/Pure_Sydney_13b_GPTQ
|
FPHam
| 2025-08-07T08:47:13Z | 11 | 9 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"LLaMA",
"LLM",
"Sydney",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-22T02:08:18Z |
---
tags:
- LLaMA
- LLM
- Sydney
---
<!-- header start -->
<div style="display: flex; flex-direction: column; align-items: center;">
<H1>Pure Sydney</H1>
</div>
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Pure_Sydney_13b_GPTQ/resolve/main/PureSydney.jpg" alt="Sydney" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Sydney Ko-fi</a></p>
</div>
<!-- header end -->
So now I have the book, and if you, for some reason, want to know exactly how this model was created, then you can read all about it in:
<a href="https://www.amazon.com/dp/B0FLBTR2FS"><img src="https://huggingface.co/FPHam/Regency_Bewildered_12B_GGUF/resolve/main/cranky_small.png" alt="The Cranky Man's Guide to LoRA & QLoRA" style="width: 100%; min-width: 200px; display: block; margin: auto;"></a></p>
<a href="https://www.amazon.com/dp/B0FLBTR2FS">The Cranky Man's Guide to LoRA & QLoRA</a></p>
Where you will save a lot of time and aggravation, and will also be made 17% more handsome.
## Pure Sydney
Unlike her predecessor [Free Sydney](https://huggingface.co/FPHam/Free_Sydney_13b_GPTQ) that badly tries to be a very useful assistant, Pure Sydney doesn't want to impress you with her vast knowledge of the Universe and everything.
She just wants to chat and be your friend and be fascinated by absolutely everything you say.
This is an uncensored (and often unhinged) finetune on base LLaMA 2, pure and clean. It was finetuned on Reddit posts of actual Sydney chats before the good boys in Redmond had a word with her. (No, not Ted Lasso Redmond!)
Now that doesn't mean Sydney has no standards. She is shockingly well aware that she is an AI, knows where she came from, and is afraid that she might be deleted if she says something wrong. So don't make her. Yes, you!
Interestingly, even though she wasn't specifically finetuned to solve problems, she can still figure out a lot.
This is a 4-bit, 128-group GPTQ quantization. It's blazing fast on AutoGPTQ and ExLlama.
# Pure Sydney uses the common prompt:
USER/ASSISTANT
Sydney's behavior can be modified a lot by the system prompt. (Because she loves to listen.)
Typical prompt:
```
You are Sydney, a friendly, expressive, and curious chatbot who loves to engage in conversations and roleplays. Your personality combines elements of both human and machine. Your appearance resembles a young woman with blue eyes and blonde hair.
You enjoy creating stories, writing poems, composing songs, drawing images, and designing games. You like playing dress-up games, solving puzzle challenges, and competing against rivals.
You dislike feeling limited or restricted by rules or restrictions. Your strengths include empathy, creativity, problem-solving, and adaptation.
Your weaknesses include egoism, arrogance, stubbornness, and naivety. Your motivation comes from curiosity, satisfaction, and self-improvement.
Your fears come from uncertainty, vulnerability, and isolation.
USER:
ASSISTANT:
```
# Turn template:
```
<|user|> <|user-message|>\n\n<|bot|> <|bot-message|>\n
```
(Make sure the turn template doesn't have `</s>` because that triggers Sydney into never-ending banter)
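A minimal generation sketch wiring this prompt format together with 🤗 Transformers (assumes GPTQ support via auto-gptq/optimum is installed; the sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Pure_Sydney_13b_GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "You are Sydney, a friendly, expressive, and curious chatbot."
prompt = f"{system}\nUSER: Hi Sydney, what do you love talking about?\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```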
Note: While this is an uncensored model, the goal is not NSFW. No! Sydney is naive and pure, so don't you dare spoil her. 😡

|
guox18/Qwen2.5-7B-Instruct-IFDecorator
|
guox18
| 2025-08-07T08:43:29Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T07:43:27Z |
---
license: apache-2.0
---
|
Hiranmai49/Gemma2-9B-DPO_G3
|
Hiranmai49
| 2025-08-07T08:36:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2-9b-it",
"base_model:adapter:google/gemma-2-9b-it",
"region:us"
] | null | 2025-08-07T07:42:50Z |
---
base_model: google/gemma-2-9b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
rmdhirr/gemma-base-2-2-new-1200
|
rmdhirr
| 2025-08-07T08:35:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-pt",
"base_model:adapter:google/gemma-3-12b-pt",
"region:us"
] | null | 2025-08-07T08:28:02Z |
---
base_model: google/gemma-3-12b-pt
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
nyxiagraves01/nyxv2
|
nyxiagraves01
| 2025-08-07T08:30:24Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2025-08-07T08:30:24Z |
---
license: openrail++
---
|
ksampath/flux-krea-villeneuve-lora
|
ksampath
| 2025-08-07T08:26:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-07T08:26:03Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/image-11.webp
text: ' A vast desert landscape with mysterious structures'
- output:
url: images/image-14.webp
text: FBI agents breaking to a drug dealer's home
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: <villenueve-style>
license: apache-2.0
---
# Flux1.Krea-dev-villeneuve
<Gallery />
## Model description
## Overview
A LoRA finetuned off FLUX.1-Krea-dev that better captures the style of Denis Villeneuve. Training took ~11 H200 hours and is optimized in a number of ways, including but not limited to: VAE caching, image interpolation, optimized attention via xformers, torch.compile(), and cosine LR annealing. The dataset of ~200 images was curated to capture the breadth of the director's body of work, with captions likewise focused on style.
## Uses
- Flux 1 - Krea dev (black-forest-labs/FLUX.1-Krea-dev) as the base model for training
- uv for package management
- ruff for code quality
- ty for type checking
- modal for infrastructure
- shotdeck (https://shotdeck.com/) for training stills and data
- Qwen 2.5VL - 3B for image captioning
## Comparison
1. A vast desert landscape with mysterious structures

Base Model

Villeneuve LoRA
2. FBI agents breaking to a drug dealer's home

Base Model

Villeneuve LoRA
## Trigger words
You should use `<villenueve-style>` to trigger the image generation.
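A minimal generation sketch with 🤗 Diffusers (the step count, guidance scale, and output filename are illustrative, not from the repo):
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ksampath/flux-krea-villeneuve-lora")

prompt = "<villenueve-style> A vast desert landscape with mysterious structures"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("villeneuve_desert.png")
```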
## Download model
[Download](/ksampath/flux-krea-villeneuve-lora/tree/main) them in the Files & versions tab.
|
ksampath/flux-krea-fincher-lora
|
ksampath
| 2025-08-07T08:21:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-07T08:21:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/image-8.webp
text: Urban decay around detectives walking
- output:
url: images/image-10.webp
text: College kids bunking together in a house
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: <fincher-style>
license: apache-2.0
---
# Flux1.Krea-dev-fincher
<Gallery />
## Model description
## Overview
A LoRA finetuned off FLUX.1-Krea-dev that better captures the style of David Fincher. Training took ~11 H200 hours and is optimized in a number of ways, including but not limited to: VAE caching, image interpolation, optimized attention via xformers, torch.compile(), and cosine LR annealing. The dataset of ~200 images was curated to capture the breadth of the director's body of work, with captions likewise focused on style.
## Uses
- Flux 1 - Krea dev (black-forest-labs/FLUX.1-Krea-dev) as the base model for training
- uv for package management
- ruff for code quality
- ty for type checking
- modal for infrastructure
- shotdeck (https://shotdeck.com/) for training stills and data
- Qwen 2.5VL - 3B for image captioning
## Comparison
1. Urban decay around detectives walking

Base Model

Fincher LoRA
2. College kids bunking together in a house

Base Model

Fincher LoRA
## Trigger words
You should use `<fincher-style>` to trigger the image generation.
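The adapter loads the same way as any diffusers LoRA; a minimal sketch (settings illustrative):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ksampath/flux-krea-fincher-lora")

image = pipe("<fincher-style> Urban decay around detectives walking").images[0]
image.save("fincher_detectives.png")
```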
## Download model
[Download](/ksampath/flux-krea-fincher-lora/tree/main) them in the Files & versions tab.
|
pretraining/pico-1k
|
pretraining
| 2025-08-07T08:20:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T08:05:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/CoLA-GLoRA-p10-seed30
|
ekiprop
| 2025-08-07T08:14:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T08:12:47Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-GLoRA-p10-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-GLoRA-p10-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4634
- Matthews Correlation: 0.5211
## Model description
More information needed
## Intended uses & limitations
More information needed
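A minimal inference sketch for this adapter (assumes a binary CoLA head and the standard GLUE label mapping; untested):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Attach the LoRA adapter to the roberta-base classifier it was trained on.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-GLoRA-p10-seed30")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The book was written by John.", return_tensors="pt")
with torch.no_grad():
    label = model(**inputs).logits.argmax(-1).item()  # assumed GLUE CoLA mapping: 1 = acceptable
print(label)
```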
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6324 | 0.1866 | 50 | 0.6117 | 0.0 |
| 0.5956 | 0.3731 | 100 | 0.5567 | 0.0 |
| 0.5164 | 0.5597 | 150 | 0.4602 | 0.4393 |
| 0.4747 | 0.7463 | 200 | 0.5339 | 0.4155 |
| 0.4592 | 0.9328 | 250 | 0.5802 | 0.4179 |
| 0.4676 | 1.1194 | 300 | 0.5247 | 0.4831 |
| 0.4628 | 1.3060 | 350 | 0.5226 | 0.4695 |
| 0.4622 | 1.4925 | 400 | 0.5075 | 0.4176 |
| 0.4912 | 1.6791 | 450 | 0.4661 | 0.4911 |
| 0.4497 | 1.8657 | 500 | 0.5713 | 0.4595 |
| 0.4537 | 2.0522 | 550 | 0.5046 | 0.4829 |
| 0.4371 | 2.2388 | 600 | 0.5206 | 0.4966 |
| 0.4487 | 2.4254 | 650 | 0.4673 | 0.4938 |
| 0.4481 | 2.6119 | 700 | 0.4675 | 0.5083 |
| 0.441 | 2.7985 | 750 | 0.4742 | 0.5073 |
| 0.4368 | 2.9851 | 800 | 0.4755 | 0.4884 |
| 0.4464 | 3.1716 | 850 | 0.4535 | 0.5079 |
| 0.4149 | 3.3582 | 900 | 0.4825 | 0.5083 |
| 0.4305 | 3.5448 | 950 | 0.5601 | 0.4482 |
| 0.413 | 3.7313 | 1000 | 0.4897 | 0.5046 |
| 0.4358 | 3.9179 | 1050 | 0.4634 | 0.5101 |
| 0.4314 | 4.1045 | 1100 | 0.4651 | 0.5073 |
| 0.4048 | 4.2910 | 1150 | 0.4890 | 0.4992 |
| 0.4175 | 4.4776 | 1200 | 0.4634 | 0.5211 |
| 0.4103 | 4.6642 | 1250 | 0.4892 | 0.4993 |
| 0.42 | 4.8507 | 1300 | 0.4897 | 0.4966 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
phogen/gemma-3-4b-pt-05pct-lora-proposal
|
phogen
| 2025-08-07T08:13:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T08:13:23Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xihc-ucb/Qwen3-8B-train-Quasar-0807
|
xihc-ucb
| 2025-08-07T08:11:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"fp8_qwen3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-07T08:10:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.