modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: sequence | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---|
Fingerling/whisper-large-v3-turbo-zh | Fingerling | 2025-05-27T09:21:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-27T09:21:00Z | ---
license: apache-2.0
---
|
mesolitica/Malaysian-gemma-3-1b-it | mesolitica | 2025-05-27T09:01:37Z | 7 | 0 | null | [
"safetensors",
"gemma3_text",
"ms",
"en",
"zh",
"ta",
"region:us"
] | null | 2025-05-03T12:25:50Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian gemma-3-1b-it
Continued finetuning of https://huggingface.co/google/gemma-3-1b-it on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responding in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Handles multi-turn Malaysian context, such as topics related to Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to teach the model Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. an effective scaling factor (alpha/rank) of 2.0; see the config sketch after this list.
3. Multipacking at 8192 context length with proper SDPA causal masking to prevent cross-document contamination, and with correctly reset position IDs.
4. Chunked CCE loss for LoRA.
5. WandB logs at https://wandb.ai/huseinzol05/lora-embedding-128-gemma3-1b-malaysian-8k?nw=nwuserhuseinzol05

Source code at https://github.com/mesolitica/malaya/tree/master/session/gemma3
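The recipe above maps directly onto a standard PEFT setup. Below is a minimal sketch, assuming the 🤗 `peft` library and only the rank, alpha, and target modules listed above; the authoritative training script lives in the linked source code.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Sketch of the LoRA configuration described above (assumed mapping onto peft;
# the actual script is in the linked mesolitica/malaya repository).
model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
config = LoraConfig(
    r=128,            # rank 128
    lora_alpha=256,   # alpha 256 -> scaling factor alpha/r = 2.0
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```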
## Benchmark
### MalayMMLU
Based on 0-shot first-token accuracy (a minimal scoring sketch follows the results below):
```
Model Accuracy shot by_letter category
0 Malaysian-gemma-3-1b-it 48.096603 0shot True STEM
1 Malaysian-gemma-3-1b-it 47.423664 0shot True Language
2 Malaysian-gemma-3-1b-it 47.210176 0shot True Social science
3 Malaysian-gemma-3-1b-it 47.709283 0shot True Others
4 Malaysian-gemma-3-1b-it 51.786121 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-gemma-3-1b-it
Metric : first
Shot : 0shot
average accuracy 48.27158964192789
accuracy for STEM 48.09660253786328
accuracy for Language 47.4236641221374
accuracy for Social science 47.21017635154669
accuracy for Others 47.70928280163108
accuracy for Humanities 51.786120591581344
```
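For reference, "first-token accuracy" scores only the first generated token against the gold option letter. A minimal sketch of that metric, assuming single-token answer letters and a plain causal-LM forward pass (the official MalayMMLU harness may differ in details):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mesolitica/Malaysian-gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

def first_token_pred(prompt: str, options=("A", "B", "C", "D")) -> str:
    # Assumes each option letter encodes to a single token.
    option_ids = [tokenizer.encode(o, add_special_tokens=False)[0] for o in options]
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    return options[int(next_token_logits[option_ids].argmax())]

# Accuracy is then the mean of (first_token_pred(question) == gold_letter).
```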
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node! |
rtl-llm/qwen2.5coder-7b-origen-all-ordered-len768 | rtl-llm | 2025-05-27T08:56:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-27T08:53:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pfnet/plamo-2-translate-base | pfnet | 2025-05-27T05:33:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"plamo2",
"text-generation",
"plamo",
"translation",
"conversational",
"custom_code",
"en",
"ja",
"base_model:pfnet/plamo-2-8b",
"base_model:finetune:pfnet/plamo-2-8b",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-27T05:29:54Z | ---
license: other
license_name: plamo-community-license
license_link: https://huggingface.co/pfnet/plamo-2-8b/blob/main/LICENSE/ja
language:
- en
- ja
pipeline_tag: text-generation
library_name: transformers
extra_gated_heading: PLaMo community license to download PLaMo 2 8B
extra_gated_description: To download PLaMo 2 8B, you have to agree to our license.
PLaMo 2 8B is released under the PLaMo community license. For non-commercial use, please contact
us via this [form](https://forms.gle/mTL8tBLrMYXKNZD56).
extra_gated_button_content: agree to PLaMo community license
extra_gated_prompt: "(English version is under construction. We apologize for the\
\ inconvenience.)\n### PLaMoコミュニティライセンス契約\nPLaMoコミュニティライセンス契約には、株式会社Preferred Networksが提供する別途定める大規模言語基盤モデルPLaMo及びその派生物を利用するためのライセンスの内容及びユーザーが遵守する事項等が定められている。ユーザーのPLaMo及びその派生物の利用には本契約が適用され、本契約に同意又は本モデル等を利用することにより、ユーザーは本契約に拘束される。\n\
#### 第1条(定義)\n(1) 「本契約」とは、PLaMoコミュニティライセンス契約を意味する。\n(2) 「PFN」とは、株式会社Preferred Networksを意味する。\n\
(3) 「本モデル」とは、別途定める「PLaMo」という名称のモデルの重み、モデルコード、トークナイザー、学習スクリプト及びこれらに付随してPFNが提供するものを意味する。\n\
(4) 「ユーザー」とは、本モデルを利用する個人又は法人を意味する。\n(5) 「派生モデル」とは、本モデルを改変又は利用し作成されるモデルの重み、モデルコード及びその他作成されたモデルの付随物を意味する。\n\
(6) 「生成物」とは、本モデル又は派生モデルの出力結果を意味する。\n(7) 「本モデル等」とは、本モデル、派生モデル及び生成物の総称を意味する。\n(8)\
\ 「本ライセンス」とは、PFNがユーザーに対して本契約に基づき本モデル等を利用することを許諾することを意味する。\n(9) 「商業目的」とは、 私的使用又は学術用途の範囲を超える、事業での利用又は営利を目的とする利用を意味する。なお、商業目的にはユーザーの製品、サービス又は事業の開発、変更又は提供(ホスティングサービスやAPI経由での提供を含む。)を目的とした使用及びユーザーの組織内部における利用も含まれる。\n\
#### 第2条(ユーザー)\nユーザーは、18歳以上又はその居住国で単独で契約を締結できる年齢に達していなければならない。但し、ユーザーの親権者又は法定代理人が本契約をユーザーが締結することに同意している場合はこの限りではない。\n\
#### 第3条(本ライセンス)\n(1) PFNは、ユーザーが本契約に同意しかつ本契約を遵守することを条件に、ユーザーに対して、本モデル等を本契約に定める条件及び範囲内で利用することを許諾する。\n\
(2) 本ライセンスは非独占、世界的、譲渡不可及びロイヤリティ無料とする。\n(3) ユーザーは、以下の条件をいずれも満たす場合に限り、商業目的を含む形で本モデル等を利用することができる。なお、ユーザーがこれらの条件のいずれかを満たさなくなった場合は、ユーザーはその時点で本モデル等を商業目的で利用することはできず、商業目的で本モデル等を利用したい場合は、新たにPFNから商業用のライセンスを取得しなければならない。\n\
\n (i) PFNの公式登録ページ https://forms.gle/mTL8tBLrMYXKNZD56 に事前に登録すること。\n\n (ii) ユーザー又はその関係会社の直近事業年度の収入又は売上が10億円(ユーザーの現地通貨換算額)を超えないこと。\n\
\n#### 第4条(再配布及び表示義務)\n(1) ユーザーが本モデル等(派生モデルやその生成物を含む)を第三者に提供する場合、以下の条件を満たさなければならない。\n\
\n (i) 本契約のコピーを提供し、本契約の条件を遵守させること。\n\n (ii) 「Built with PLaMo」と明示し、関連ウェブサイト、ユーザーインターフェース、ブログ記事、製品情報ページ又は製品ドキュメントに記載すること。\n\
\n (iii) 本モデル等を利用して作成した AI モデルの名称に「PLaMo」を含めること。\n\n#### 第5条(生成物の利用)\n(1) ユーザーは、生成物を本モデル又は派生モデルの生成物であることを明示することを条件に、公表することができる。\n\
(2) 生成物を利用してモデルを学習した場合、そのモデルは派生モデルとして本契約の条件が適用され、本契約のライセンス条件の下でのみ利用、配布及び商業化することができる。\n\
#### 第6条(その他利用条件)\nユーザーは、本モデル等の利用に関して、以下に定める行為をしてはならない。\n(1) 法令又は公序良俗に違反する行為\n(2)\
\ 犯罪行為又はこれを予告、関与、助長その他これらに関連する行為\n(3) PFN又は第三者の権利又は利益を侵害する行為\n(4) PFN又は第三者の名誉若しくは信用を毀損する行為\n\
(5) 生成物がPFNの公式見解等であるものという錯誤を生む情報を流布する行為\n(6) 虚偽の情報を発信する行為\n(7) 上記の他、PFNが不適切と合理的に判断する行為\n\
#### 第7条(保証の否認)\n(1) 本モデル及び生成物は、「現状有姿」で提供され、PFNは、これらに対して、正確性、真実性、商品性、品質、性能、特定目的への適合性、権利の非侵害など一切の保証をしない。\n\
(2) ユーザーは、法律、医療、金融又は人物評価その他重要な事項の決定に関して、生成物を唯一の証拠、評価又は意見として使用してはならない。\n(3) ユーザーは、本モデル等の使用及びその結果に関して全ての責任を負う。\n\
#### 第8条(責任の制限)\n(1) 契約責任、不法行為又は製造物責任その他の法的責任のいずれかであるかを問わず、PFNが本契約及び本モデル等に関してユーザーに対して負う損害賠償の責任は、通常かつ直接の損害に限り(逸失利益、特別損害、間接損害その他の損害については、その予見可能性の有無に関わらず、責任を負わない。)、損害賠償額の上限は、500円とする。但し、PFNに故意又は重過失が認められる場合はこの限りではない。\n\
(2) 前項に関わらず、ユーザーが本モデル等を事業のために利用する場合は、PFNは本契約及び本モデル等に関してユーザーに対して一切の損害賠償責任及びその他の責任を負わない。\n\
#### 第9条(ユーザーの責任)\n(1) ユーザーは、本モデル等の取得及び利用に関して、適用される法令(輸出入及び貿易に関連する法令を含む。)及び本契約を遵守する。\n\
(2) ユーザーは、本契約違反又は本モデル等の使用によって、PFNに損害を与えた場合は、その損害を賠償する。\n(3) ユーザーの本モデル等の使用に起因して、PFNが第三者から損害賠償請求その他請求を受けた場合、ユーザーは、当該請求からPFNを免責し、PFNに損害を与えないようにする。\n\
#### 第10条(権利の帰属)\n(1) 本モデルの一切の権利は、PFN又はPFNに本モデルのライセンスをしている第三者に帰属する。\n(2) 派生モデルのうち、ユーザーが本モデルを改変した部分の権利はユーザーに帰属し、その他の部分の権利はPFNに帰属する。\n\
(3) 生成物の一切の権利はユーザーに帰属する。\n#### 第11条(契約期間及び終了)\n(1) 本契約は、ユーザーが本契約に同意したとき又は本モデルにアクセスしたときから、本契約が解約されたときまでとする。\n\
(2) ユーザーが本契約のいずれかの条項に違反した場合、PFNは直ちに本契約を解除することができ、ユーザーは本モデル等のすべてのコピーを削除し、利用を即時に停止しなければならない。\n\
#### 第12条(契約の変更)\nPFNは、本契約(本モデル等に関するルールや諸規定等を含む。以下本条において同じ。)を変更できるものとする。PFNは、本契約を変更する場合には、変更の内容及び変更の効力発生時期を、当該効力発生時期までにPFN所定の方法で告知するものとする。\n\
#### 第13条(準拠法及び管轄裁判所)\n(1) 本契約の準拠法は日本法とする。\n(2) 本モデル等及び本契約に起因する紛争については、東京地方裁判所が専属的合意管轄裁判所とする。"
base_model: pfnet/plamo-2-8b
tags:
- plamo
- translation
---
# PLaMo Translation Model
PLaMo翻訳モデルはPreferred Networksによって開発された翻訳向け特化型大規模言語モデルです。
詳しくは[ブログ記事](https://tech.preferred.jp/ja/blog/plamo-translate/)および[プレスリリース](https://www.preferred.jp/ja/news/pr20250527/)を参照してください。
PLaMo Translation Model is a specialized large-scale language model developed by Preferred Networks for translation tasks.
For details, please refer to the [blog post](https://tech.preferred.jp/ja/blog/plamo-translate/) and [press release](https://www.preferred.jp/ja/news/pr20250527/).
List of models:
- [plamo-2-translate](http://huggingface.co/pfnet/plamo-2-translate) ... Post-trained model for translation
- [plamo-2-translate-base](http://huggingface.co/pfnet/plamo-2-translate-base) ... Base model for translation
- [plamo-2-translate-eval](http://huggingface.co/pfnet/plamo-2-translate-eval) ... Pair-wise evaluation model
PLaMo Translation Model is released under the PLaMo community license. Please check the license below and agree to it before downloading.
- (EN) under construction: we apologize for the inconvenience
- (JA) https://www.preferred.jp/ja/plamo-community-license/
**NOTE**: This model has **NOT** been instruction-tuned for chat dialog or other downstream tasks.
### For *commercial* users
Please check the PLaMo community license and contact us via the following form for commercial use.
- (EN/JA) https://forms.gle/mTL8tBLrMYXKNZD56
## Usage
### main/base model
```py
import vllm
# max_model_len/max_num_batched_tokens can be increased when running on a GPU with substantial memory.
# NOTE: Switch to "pfnet/plamo-2-translate-base" to try the base model.
llm = vllm.LLM(model="pfnet/plamo-2-translate", trust_remote_code=True, max_model_len=2000, max_num_batched_tokens=2000)
prompt = r'''<|plamo:op|>dataset
translation
<|plamo:op|>input lang=English
Write the text to be translated here.
<|plamo:op|>output lang=Japanese
'''
responses = llm.generate([prompt] * 1, sampling_params=vllm.SamplingParams(temperature=0, max_tokens=1024, stop=["<|plamo:op|>"]))
# NOTE: This outputs "ここに翻訳するテキストを入力してください。".
print(responses[0].outputs[0].text)
```
### evaluation model
```py
import vllm
# max_model_len/max_num_batched_tokens can be increased when running on a GPU with substantial memory.
llm = vllm.LLM(model="pfnet/plamo-2-translate-eval", trust_remote_code=True, max_model_len=2000, max_num_batched_tokens=2000)
prompt = r'''<|plamo:op|>dataset
translation evaluation
<|plamo:op|>input lang=English
This is an apple.
<|plamo:op|>output id=A lang=Japanese
これはりんごです。
<|plamo:op|>output id=B lang=Japanese
これはリンゴです。
<|plamo:op|>best
id='''
responses = llm.generate([prompt] * 1, sampling_params=vllm.SamplingParams(temperature=0, max_tokens=1, stop=["<|plamo:op|>"]))
# NOTE: This outputs "A".
print(responses[0].outputs[0].text)
```
## Bias, Risks, and Limitations
PLaMo Translation Model is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, PLaMo Translation Model's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo Translation Model, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Acknowledgement
This model is trained under the project, “Research and Development Project of the Enhanced Infrastructures for Post 5G Information and Communication System” (JPNP 20017), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
## AI policies for Preferred Networks, Inc. group
- (EN) https://www.preferred.jp/en/company/aipolicy/
- (JA) https://www.preferred.jp/ja/company/aipolicy/ |
BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb61seq5028jlexpwa8r7ont | BootesVoid | 2025-05-27T05:31:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-27T05:31:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KYLEE
---
# Cmb3S2E7J07Guu1Cgteid9Ti5_Cmb61Seq5028Jlexpwa8R7Ont
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KYLEE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KYLEE",
"lora_weights": "https://huggingface.co/BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb61seq5028jlexpwa8r7ont/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb61seq5028jlexpwa8r7ont', weight_name='lora.safetensors')
image = pipeline('KYLEE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb61seq5028jlexpwa8r7ont/discussions) to add images that show off what you’ve made with this LoRA.
|
Intel/Qwen3-30B-A3B-int4-AutoRound-inc | Intel | 2025-05-27T05:07:52Z | 0 | 0 | null | [
"safetensors",
"qwen3_moe",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"4-bit",
"auto-round",
"region:us"
] | null | 2025-05-27T02:07:04Z | ---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
base_model:
- Qwen/Qwen3-30B-A3B
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) generated by [intel/auto-round](https://github.com/intel/auto-round).
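For intuition, "int4 with group_size 128 and symmetric quantization" means each contiguous group of 128 weights shares a single scale, and the weights are rounded to signed 4-bit integers centered at zero. The toy sketch below shows the arithmetic only; AutoRound itself additionally learns the rounding via signed gradient descent (see the cited paper).

```python
import torch

def quantize_sym_int4(w: torch.Tensor, group_size: int = 128):
    """Toy per-group symmetric int4 quantization of a 1-D tensor (illustration only)."""
    groups = w.reshape(-1, group_size)
    scale = groups.abs().amax(dim=1, keepdim=True) / 7  # map |max| onto the int4 edge
    q = torch.clamp(torch.round(groups / scale), -8, 7).to(torch.int8)
    return q, scale

q, scale = quantize_sym_int4(torch.randn(1024))
w_hat = (q.float() * scale).reshape(-1)  # dequantized approximation of w
```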
## How To Use
### INT4 Inference (CPU/CUDA/Intel GPU)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "Intel/Qwen3-30B-A3B-int4-AutoRound-inc"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512, ##change this to align with the official usage
do_sample=False ##change this to align with the official usage
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
##INT4:
# thinking content: <think>
# Okay, the user is asking for a short introduction to large language models. Let me start by defining what they are. I should mention that they're AI systems trained on vast amounts of text data. Then, I need to explain their purpose, like generating human-like text, answering questions, etc.
# I should highlight their key features: large size, which means they have a lot of parameters, and the training data, which is diverse. Maybe mention that they can perform various tasks without needing specific training for each one. Also, it's important to note that they're based on deep learning, specifically neural networks.
# I should also touch on their applications, like in chatbots, content creation, and data analysis. But I need to keep it concise. Maybe mention some examples, like GPT or BERT, but not too detailed. Also, a bit about their limitations, like potential biases or errors, but since it's a short intro, maybe just a brief mention.
# Wait, the user said "short introduction," so I need to be concise. Avoid going into too much technical detail. Make sure the language is simple and accessible. Check for any jargon that might need simplifying. Let me structure it: definition, how they work, key features, applications, and a note on their impact. That should cover it without being too lengthy.
# </think>
# content: A **large language model (LLM)** is an advanced artificial intelligence system designed to understand and generate human-like text by analyzing vast amounts of data. Trained on extensive text corpora, these models learn patterns, grammar, and context to perform tasks like answering questions, writing essays, coding, or even creating art. Their "large" scale refers to the massive number of parameters (millions or billions) that enable complex language understanding. LLMs are built using deep learning techniques, such as transformer architectures, and can adapt to diverse tasks without needing specific training for each one. They power applications like chatbots, virtual assistants, and content creation tools, revolutionizing how humans interact with technology. However, they also raise ethical considerations, such as bias and misinformation, requiring careful oversight.
##BF16:
# thinking content: <think>
# Okay, the user is asking for a short introduction to large language models. Let me start by recalling what I know about them. Large language models, or LLMs, are a type of AI that's trained on vast amounts of text data. They can generate human-like text, answer questions, and perform various language tasks.
# I should mention their size, like the number of parameters, which is a key factor. Maybe explain that they're built using deep learning, specifically neural networks. Also, they're trained on diverse data, which helps them understand different topics and languages.
# Applications are important too. They're used in chatbots, content creation, translation, and more. But I should also note some challenges, like the need for large computational resources and potential issues with bias or misinformation.
# Wait, the user might be a student or someone new to AI. I should keep it simple and avoid jargon. Maybe start with a definition, then key features, applications, and a note on challenges. Make sure it's concise but covers the essentials. Let me check if I'm missing anything. Oh, maybe mention that they can understand context and generate coherent responses. Also, examples like GPT or BERT could be helpful, but since the user asked for a short intro, maybe just refer to them as examples without going into detail. Alright, that should cover it.
# </think>
# content: A **large language model (LLM)** is an advanced artificial intelligence system trained on vast amounts of text data to understand and generate human-like language. These models use deep learning techniques, particularly neural networks, to analyze patterns in text, enabling them to perform tasks like answering questions, writing essays, translating languages, and even coding. Their "large" scale refers to the massive number of parameters (settings) they contain, allowing them to capture complex linguistic structures and context. LLMs like GPT or BERT are widely used in applications such as chatbots, content creation, and data analysis, though they also raise considerations around bias, ethics, and computational resources.
prompt = "9.11和9.8哪个数字大"
##INT4:
# thinking content:
# content: <think>
# 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认这两个数字的结构。9.11和9.8都是小数,对吧?不过可能用户写的时候有没有什么问题?比如9.11是不是可能被误解为9.11,而9.8是9.80?不过通常来说,小数点后的位数不同的话,应该按照数值大小来比较。
# 首先,我应该把这两个数都转换成相同的小数位数,或者直接比较它们的数值。比如,9.11和9.8,可以看成是9.11和9.80。这时候,比较整数部分都是9,所以要看小数部分。小数部分的话,第一位是1和8,对吧?因为9.11的小数部分是0.11,而9.80的小数部分是0.80。这时候,0.80比0.11大,所以9.80比9.11大,也就是9.8比9.11大。
# 不过,可能用户会疑惑,为什么小数点后第二位的11和80比较?或者有没有可能用户把9.11写成9.11,而9.8是9.8,这时候可能需要更仔细地分析。比如,9.8其实可以看作9.80,而9.11是9.11,所以比较的话,小数点后第一位是8和1,显然8比1大,所以9.8更大。
# 不过,也有可能用户对小数的比较不太熟悉,可能需要更详细的解释。比如,先比较整数部分,如果整数部分相同,再比较小数部分。这里整数部分都是9,所以继续比较小数部分。小数部分的话,第一位是1和8,所以直接比较第一位,8比1大,所以9.8更大。即使9.11的小数部分有两位,但第一位已经决定了大小,后面的位数不需要再比较了。
# 不过,也有可能用户会误以为9.11的小数部分是11,而9.8的小数部分是8,所以可能认为11比8大?但其实小数点后的每一位都是单独的位数,第一位是十分位,第二位是百分位。
##BF16:
# thinking content:
# content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I spell the word correctly. Strawberry... S-T-R-A-W-B-E-R-R-Y. Wait, is that right? Let me check again. S-T-R-A-W-B-E-R-R-Y. Yeah, that's correct. Now, I need to count the number of 'r's.
# Let me break it down letter by letter. Starting from the beginning:
# 1. S
# 2. T
# 3. R
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R
# 9. R
# 10. Y
# So, the letters are S, T, R, A, W, B, E, R, R, Y. Now, looking for 'r's. The third letter is R, then the eighth is R, and the ninth is also R. So that's three 'r's? Wait, let me count again. Third letter: R (1), then the eighth: R (2), ninth: R (3). So three 'r's in total. But wait, sometimes people might miss a letter. Let me write them out:
# Position 3: R
# Position 8: R
# Position 9: R
# Yes, that's three. But wait, sometimes when people write "strawberry", they might have a different spelling? No, I think that's the standard. Let me confirm the spelling. Strawberry is spelled S-T-R-A-W-B-E-R-R-Y. So yes, the 'r's are at positions 3, 8, and 9. So three 'r's. But wait, maybe I'm miscounting. Let me write the word again:
# S T R A W B E R R Y
# Breaking it down:
# S (1)
# T (2)
# R (3)
# A (4)
# W (5)
# B (6)
# E (7)
# R (8)
# R (9)
# Y (10)
# So positions 3, 8, and 9 are 'r's. That's three. So the answer should be 3. But I want to make sure I'm not missing any. Let me check another way. Maybe write the word and underline the 'r's:
# S T **R** A W B E **R** **R** Y
# Yes, three
prompt = "How many r in word strawberry"
##INT4:
# thinking content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, I need to check each letter one by one.
# First, I'll write out the word: S-T-R-A-W-B-E-R-R-Y. Let me count each letter. Starting from the beginning:
# S - that's the first letter, not an 'r'.
# T - second, also not.
# R - third letter, that's one 'r'.
# A - fourth, nope.
# W - fifth, no.
# B - sixth, no.
# E - seventh, no.
# R - eighth, that's the second 'r'.
# R - ninth, third 'r'.
# Y - tenth, no.
# Wait, so that's three 'r's? Let me double-check. S-T-R-A-W-B-E-R-R-Y. Yes, the third letter is R, then the eighth and ninth letters are both R. So that's three 'r's in total. But wait, sometimes people might miss a letter. Let me write it again:
# S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). So positions 3, 8, and 9. That's three 'r's. So the answer should be 3. But maybe I should check if there's any other 'r' I missed. Let me spell the word again: S-T-R-A-W-B-E-R-R-Y. No, there's no other 'r' in there. So the answer is three.
# </think>
# content: The word **"strawberry"** contains **3** instances of the letter **"r"**.
# Here's the breakdown:
# - **S**
# - **T**
# - **R** (1st **r**)
# - **A**
# - **W**
# - **B**
# - **E**
# - **R** (2nd **r**)
# - **R** (3rd **r**)
# - **Y**
# **Answer:** 3.
##BF16:
# thinking content:
# content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I have the correct spelling of the word. Let me write it out: S-T-R-A-W-B-E-R-R-Y. Wait, is that right? Let me check again. Strawberry... yes, that's correct. Now, I need to count the number of 'r's in that spelling.
# Let me go through each letter one by one. Starting with the first letter: S. Not an 'r'. Next is T. Still not. Then R. Okay, that's one. Then A, W, B, E. So far, only one 'r'. Then comes the next letters: R. That's the second 'r'. Then another R. Wait, is there a third 'r'? Let me check again. The word is S-T-R-A-W-B-E-R-R-Y. So after the first R, there's a B, E, then two R's. So that's two R's? Wait, no. Let me count again. Let's break it down:
# 1. S
# 2. T
# 3. R (1st)
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R (2nd)
# 9. R (3rd)
# 10. Y
# Wait, so the letters are S, T, R, A, W, B, E, R, R, Y. So the 'r's are at positions 3, 8, and 9. That's three 'r's. But wait, maybe I miscounted. Let me write it out again:
# S-T-R-A-W-B-E-R-R-Y. So after the first R (position 3), then the next letters are A, W, B, E, then R (position 8), then another R (position 9). So that's three R's. But sometimes people might miss the second R. Let me check again. The word is strawberry. Let me spell it again: S-T-R-A-W-B-E-R-R-Y. Yes, that's correct. So the R's are in the third, eighth, and ninth positions. Therefore, there are three 'r's. Wait, but I thought maybe the answer was two. Maybe I need to confirm. Let me check an online source or think of another way. Alternatively, maybe
prompt = "请简短介绍一下阿里巴巴公司"
##INT4:
# thinking content: <think>
# 好的,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。可能他们需要一个快速的概述,所以得保持简洁。阿里巴巴成立于1999年,由马云和其他团队创立,这点很重要,必须提到。
# 接下来,公司主要的业务是什么?淘宝、天猫这些电商平台是核心,还有阿里云,这是他们的重要部分。可能用户对云计算也感兴趣,所以得包括进去。另外,支付宝作为支付工具,也是阿里巴巴生态系统的一部分,应该提到。
# 然后,公司的影响力和地位。作为全球最大的电商平台之一,可能需要强调其规模和市场地位。还有,阿里巴巴的子公司和业务扩展,比如菜鸟网络、阿里健康等,但可能用户只需要简短,所以可能不需要展开。
# 用户可能想知道阿里巴巴的总部在哪里,但问题中没有提到,所以可能不需要。不过,如果有必要,可以加上。另外,成立年份和创始人是关键信息,必须准确。
# 需要检查是否有遗漏的重要信息,比如上市情况,阿里巴巴在纽交所上市,但可能用户不需要这么详细。保持简短的话,可能不需要。最后,确保语言流畅,信息准确,没有错误。
# </think>
# content: 阿里巴巴集团(Alibaba Group)是全球领先的电子商务和科技公司,成立于1999年,由马云及其团队在中国杭州创立。其核心业务包括电子商务平台(如淘宝、天猫)、云计算服务(阿里云)、数字支付(支付宝)、物流(菜鸟网络)及金融科技等。阿里巴巴通过创新和技术驱动,已成为中国乃至全球最具影响力的互联网企业之一。
##BF16:
# thinking content: <think>
# 好的,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。他们可能对阿里巴巴不太了解,或者需要快速了解其基本情况。简短介绍的话,应该包括成立时间、创始人、总部地点、主要业务和影响力。
# 接下来,我得确认阿里巴巴的成立时间。我记得是1999年,由马云和他的团队创立。总部在杭州,这点没错。主要业务方面,阿里巴巴集团现在涵盖电商、金融、物流、云计算等多个领域。比如淘宝、天猫是电商平台,蚂蚁金服是金融,菜鸟网络是物流,阿里云是云计算。这些都需要提到。
# 另外,可能需要提到它的全球影响力,比如成为全球最大的电商平台之一,以及对中国经济的贡献。不过要保持简短,不能太详细。还要注意不要遗漏重要子公司或业务板块,但也不能太冗长。
# 用户可能还想知道阿里巴巴的上市情况,比如在纽交所上市的时间,但可能不需要太详细,因为问题只是简短介绍。另外,是否需要提到最近的动态?比如最近的业务调整或者新项目?不过简短介绍的话,可能不需要,保持基础信息即可。
# 需要检查是否有错误信息,比如成立年份是否正确,创始人是否正确,总部是否在杭州。确认无误后,组织语言,确保简洁明了,不超过两到三句话。可能还需要提到其子公司和业务范围,但用简短的列举方式,比如“涵盖电子商务、金融、物流、云计算等领域”。
# </think>
# content: 阿里巴巴集团(Alibaba Group)成立于1999年,由马云及其团队在杭州创立,是全球领先的电子商务和科技公司。其核心业务包括电商平台(如淘宝、天猫)、金融科技(蚂蚁集团)、物流(菜鸟网络)及云计算(阿里云)等,致力于通过数字技术推动全球商业发展,已成为中国最具影响力的互联网企业之一。
```
### Generate the model
Here is the sample command to generate the model.
```bash
auto-round-best \
--model Qwen/Qwen3-30B-A3B \
--device 0 \
--group_size 128 \
--bits 4 \
--format 'auto_round' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
yusrilfalih/llama2-MIMICiii-lora-finetunned-1K-v1 | yusrilfalih | 2025-05-27T05:07:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2025-05-27T05:02:56Z | ---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
declare-lab/PathFinder-PRM-7B | declare-lab | 2025-05-27T04:55:36Z | 0 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"math",
"reasoning",
"text-classification",
"en",
"dataset:declare-lab/PathFinder-600K",
"arxiv:2505.19706",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-26T13:40:54Z | ---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-Math-7B-Instruct
pipeline_tag: text-classification
library_name: transformers
datasets:
- declare-lab/PathFinder-600K
tags:
- math
- reasoning
---
# PathFinder-PRM-7B
<div align="center">
<img src="images/PathFinder.png" width="300">
</div>
## Introduction
PathFinder-PRM-7B is a hierarchical discriminative Process Reward Model (PRM) designed to identify errors and reward correct mathematical reasoning in multi-step outputs from large language models (LLMs). Instead of treating evaluation as a single correct-or-wrong decision, PathFinder-PRM-7B breaks its error judgment into two parts: whether the reasoning is mathematically correct, and whether it is logically consistent. It predicts these aspects separately and then combines them to decide whether the current reasoning step leads to a correct final solution. PathFinder-PRM-7B is trained on a combination of high-quality human-annotated data (PRM800K) and additional automatically annotated samples, enabling robustness to common failure patterns and strong generalization across diverse benchmarks such as ProcessBench and PRMBench.
## Model Details
### Model Description
- **Model type:** Process Reward Model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Qwen/Qwen2.5-Math-7B-Instruct
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/declare-lab/PathFinder-PRM
- **Paper:** https://arxiv.org/abs/2505.19706
For more details, please refer to our paper and GitHub repository.
## Usage
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### 🤗 Hugging Face Transformers
Below is a code snippet showing how to use PathFinder-PRM-7B with transformers:
```python
import torch
from transformers import AutoModel, AutoTokenizer
import torch.nn.functional as F
model_name = "declare-lab/PathFinder-PRM-7B"
device = "auto"
PROMPT_PREFIX = "You are a Math Teacher. Given a question and a student's solution, evaluate the mathemetical correctness, logic consistency of the current step and whether it will lead to the correct final solution"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
model_name,
device_map=device,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
attn_implementation = "flash_attention_2",
).eval()
pos_token_id = tokenizer.encode("<+>")[0]
neg_token_id = tokenizer.encode("<->")[0]
def run_inference(sample_input):
message_ids = tokenizer.apply_chat_template(
sample_input,
tokenize=True,
return_dict=True,
return_tensors='pt'
).to(model.device)
mask_token_id = tokenizer.encode("<extra>")[0]
token_masks = (message_ids['input_ids'] == mask_token_id)
shifted_mask = torch.cat(
[
token_masks[:, 1:],
torch.zeros(token_masks.size(0), 1, dtype=torch.bool, device=model.device)
],
dim=1
)
# 1st Forward Pass
with torch.no_grad():
outputs = model(**message_ids)
allowed_token_ids = torch.tensor([pos_token_id, neg_token_id], device=outputs.logits.device)
masked_logits = outputs.logits[shifted_mask][:, allowed_token_ids]
predicted_indices = masked_logits.argmax(dim=-1)
predicted_tokens = allowed_token_ids[predicted_indices]
decoded_tokens = [tokenizer.decode([int(token_id)], skip_special_tokens=False) for token_id in predicted_tokens]
if '<->' in decoded_tokens:
# error found in step
return -1
# preparing input for 2nd Forward Pass
new_messages = sample_input.copy()
asst_response = new_messages[-1]['content']
# replacing mask tokens with pred tokens for math and consistency
for pred in decoded_tokens:
asst_response = asst_response.replace("<extra>", pred, 1)
asst_response += ', Correctness: <extra>'
new_messages[-1]['content'] = asst_response
new_message_ids = tokenizer.apply_chat_template(
new_messages,
tokenize=True,
return_dict=True,
return_tensors='pt'
).to(model.device)
token_masks = (new_message_ids['input_ids'] == mask_token_id)
shifted_mask = torch.cat(
[
token_masks[:, 1:],
torch.zeros(token_masks.size(0), 1, dtype=torch.bool, device=model.device)
],
dim=1
)
# 2nd Forward Pass
with torch.no_grad():
outputs = model(**new_message_ids)
masked_logits = outputs.logits[shifted_mask]
restricted_logits = masked_logits[:, [pos_token_id, neg_token_id]]
probs_pos_neg = F.softmax(restricted_logits, dim=-1)
return probs_pos_neg[0][0].cpu().item()
question = "Sue lives in a fun neighborhood. One weekend, the neighbors decided to play a prank on Sue. On Friday morning, the neighbors placed 18 pink plastic flamingos out on Sue's front yard. On Saturday morning, the neighbors took back one third of the flamingos, painted them white, and put these newly painted white flamingos back out on Sue's front yard. Then, on Sunday morning, they added another 18 pink plastic flamingos to the collection. At noon on Sunday, how many more pink plastic flamingos were out than white plastic flamingos?"
prev_steps = [ "To find out how many more pink plastic flamingos were out than white plastic flamingos at noon on Sunday, we can break down the problem into steps. First, on Friday, the neighbors start with 18 pink plastic flamingos.",
"On Saturday, they take back one third of the flamingos. Since there were 18 flamingos, (1/3 \\times 18 = 6) flamingos are taken back. So, they have (18 - 6 = 12) flamingos left in their possession. Then, they paint these 6 flamingos white and put them back out on Sue's front yard. Now, Sue has the original 12 pink flamingos plus the 6 new white ones. Thus, by the end of Saturday, Sue has (12 + 6 = 18) pink flamingos and 6 white flamingos.",
"On Sunday, the neighbors add another 18 pink plastic flamingos to Sue's front yard. By the end of Sunday morning, Sue has (18 + 18 = 36) pink flamingos and still 6 white flamingos."]
curr_step = "To find the difference, subtract the number of white flamingos from the number of pink flamingos: (36 - 6 = 30). Therefore, at noon on Sunday, there were 30 more pink plastic flamingos out than white plastic flamingos. The answer is (\\boxed{30})."
prev_steps_str = "\n\n".join(prev_steps)
messages = [
{"role": "user", "content": PROMPT_PREFIX + "\n\n Question: "+ question},
{"role": "assistant", "content": prev_steps_str + "\n\nCurrent Step: " + now_step +" Math reasoning: <extra>, Consistency: <extra>"},
]
reward_score = run_inference(messages)
```
## Evaluation
#### Evaluation Benchmarks
- [**ProcessBench**](https://huggingface.co/datasets/Qwen/ProcessBench)
- [**PRMBench**](https://github.com/ssmisya/PRMBench)
- [**Reward-Guided Greedy Search**](https://github.com/NJUNLP/R-PRM/tree/main/src/datasets)
- [MATH500](https://huggingface.co/datasets/HuggingFaceH4/MATH-500)
- [AIME24](https://huggingface.co/datasets/math-ai/aime24)
- [AMC23](https://huggingface.co/datasets/math-ai/amc23)
- [Minerva Math](https://huggingface.co/datasets/math-ai/minervamath)
- [Olympiad Bench](https://huggingface.co/datasets/Hothan/OlympiadBench)
- [College Math](https://huggingface.co/datasets/realtreetune/college_math)
### Results

#### PRMBench Results
| Model | Simplicity | Soundness | Sensitivity | Overall |
|----------------------------------|------------|-----------|-------------|---------|
| **LLM-as-judge, Proprietary Language Models** | | | | |
| Gemini-2.0-thinking-exp-1219 | 66.2 | 71.8 | 75.3 | 68.8 |
| GPT-4o | 59.7 | 70.9 | 75.8 | 66.8 |
| **LLM-as-judge, Open-source Language Models** | | | | |
| Qwen-2.5-Math-72B | 55.1 | 61.1 | 67.1 | 57.4 |
| QwQ-Preview-32B | 56.4 | 68.2 | 73.5 | 63.6 |
| **Discriminative Process Reward Models** | | | | |
| Math-Shepherd-7B | 47.1 | 45.7 | 60.7 | 47.0 |
| Math-PSA-7B | 51.3 | 51.8 | 64.9 | 52.3 |
| RLHFlow-Mistral-8B | 46.7 | 57.5 | 68.5 | 54.4 |
| Lemma-PRM800k-7B | 51.4 | 50.9 | 66.0 | 52.0 |
| ReasonEval-7B | 55.5 | 63.9 | 71.0 | 60.0 |
| Qwen2.5-Math-PRM-7B | 52.1 | **71.0** | 75.5 | 65.5 |
| 🟢 PathFinder-PRM-7B | **58.9** | 70.8 | **76.9** | **67.7** |
Note: Simplicity, Soundness, and Sensitivity are averaged sub-metrics from PRMBench. Our model, PathFinder-PRM-7B, outperforms all open-source discriminative PRMs and LLM-as-judge models, while achieving competitive performance compared to large proprietary models.
#### ProcessBench Results
| Model | # Samples | GSM8K | MATH | Olympiad | OmniMath | Avg. F1 |
|-------------------------------|-----------|-------|-------|----------|----------|---------|
| Math-Shepherd-7B | 445K | 47.9 | 29.5 | 24.8 | 23.8 | 31.5 |
| RLHFlow-Mistral-8B | 273K | 50.4 | 33.4 | 13.8 | 15.8 | 28.4 |
| Llemma-PRM800K-7B | ~350K | 48.4 | 43.1 | 28.5 | 33.4 | 38.4 |
| Qwen2.5-Math-7B-PRM800K | 264K | 68.2 | 62.6 | 50.7 | 44.3 | 58.5 |
| 🟢 PathFinder-PRM-7B | ~400K | 77.9 | 75.3 | 65.0 | 59.7 | 69.5 |
| Qwen2.5-Math-PRM-7B | ~1.5M | 82.4 | 77.6 | 67.5 | 66.3 | 73.5 |
PathFinder-PRM-7B outperforms models trained on similar amounts of data on ProcessBench, but scores about 4 F1 points lower than Qwen2.5-Math-PRM-7B, which was trained on roughly 3x more data.
### Reward-Guided Greedy Search (PRM@8)
| Model | AIME24 | AMC23 | MATH | Olympiad | College | Minerva | Avg |
|------------------------------|--------|-------|-------|----------|---------|---------|-------|
| Math-Shepherd-7B | 13.3 | 52.5 | 74.6 | 38.5 | 36.5 | 41.2 | 42.8 |
| Math-PSA-7B | 6.7 | 57.5 | 79.8 | 42.5 | 41.0 | 39.3 | 44.5 |
| Skywork-PRM-7B | 10.0 | 57.5 | 77.8 | 41.5 | 39.0 | **43.4** | 44.9 |
| Qwen2.5-Math-PRM-7B | 16.7 | 60.0 | **81.0** | **43.5** | 39.0 | 40.4 | 46.8 |
| 🟢 PathFinder-PRM-7B | **20.0** | **62.5** | 78.8 | 36.5 | **55.0** | 36.7 | **48.3** |
Note: All results are computed using reward-guided greedy search with Qwen2.5-7B-Instruct as the policy model. PathFinder-PRM-7B outperforms all open-source discriminative PRMs in reward-guided greedy search, showcasing its ability to better guide policy models towards correct solutions.
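To make the search procedure concrete, here is a minimal sketch of step-level reward-guided greedy search. `generate_step` and `score_step` are hypothetical helpers: the former samples one candidate next step from the policy model, the latter scores a step with the PRM (e.g. via `run_inference` above). PRM@8 corresponds to `n_candidates=8`.

```python
def reward_guided_greedy_search(question, generate_step, score_step,
                                n_candidates=8, max_steps=20):
    """Sketch of PRM-guided greedy search; not the paper's exact implementation."""
    steps = []
    for _ in range(max_steps):
        # Sample several candidate continuations of the current partial solution.
        candidates = [generate_step(question, steps) for _ in range(n_candidates)]
        # Greedily keep the candidate the PRM scores highest.
        best = max(candidates, key=lambda step: score_step(question, steps, step))
        steps.append(best)
        if "\\boxed" in best:  # stop once a final boxed answer appears
            break
    return steps
```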
## Citation
```bibtex
@misc{pala2025errortypingsmarterrewards,
title={Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision},
author={Tej Deep Pala and Panshul Sharma and Amir Zadeh and Chuan Li and Soujanya Poria},
year={2025},
eprint={2505.19706},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.19706},
}
``` |
lisabdunlap/Qwen3-8B-base-5e5 | lisabdunlap | 2025-05-27T04:50:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-27T04:49:17Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
izzcw/mini_llama_crafting_sft_success_new_mem | izzcw | 2025-05-27T00:17:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T23:11:05Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: mini_llama_crafting_sft_success_new_mem
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini_llama_crafting_sft_success_new_mem
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the identity and the crafting_sft_success_new_mem datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
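For readers reproducing this outside LLaMA-Factory, the list above corresponds roughly to the following 🤗 `TrainingArguments` (an assumed mapping; names below are the standard Trainer equivalents). Note the effective batch size: 1 per device × 8 GPUs × 16 accumulation steps = 128.

```python
from transformers import TrainingArguments

# Assumed reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="mini_llama_crafting_sft_success_new_mem",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    seed=42,
    optim="adamw_torch",
)
```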
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8427 | 0.3380 | 50 | 1.1575 |
| 0.5411 | 0.6760 | 100 | 0.5065 |
| 0.519 | 1.0203 | 150 | 0.4361 |
| 0.3662 | 1.3583 | 200 | 0.4007 |
| 0.3679 | 1.6962 | 250 | 0.3948 |
| 0.3176 | 2.0406 | 300 | 0.3846 |
| 0.2141 | 2.3785 | 350 | 0.4076 |
| 0.2089 | 2.7165 | 400 | 0.3996 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
WaveCut/QwenLong-L1-32B-mlx-8Bit | WaveCut | 2025-05-26T23:45:39Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"long-context",
"large-reasoning-model",
"mlx-my-repo",
"dataset:Tongyi-Zhiwen/DocQA-RL-1.6K",
"base_model:Tongyi-Zhiwen/QwenLong-L1-32B",
"base_model:quantized:Tongyi-Zhiwen/QwenLong-L1-32B",
"license:apache-2.0",
"8-bit",
"region:us"
] | null | 2025-05-26T23:44:05Z | ---
license: apache-2.0
datasets:
- Tongyi-Zhiwen/DocQA-RL-1.6K
base_model: Tongyi-Zhiwen/QwenLong-L1-32B
tags:
- long-context
- large-reasoning-model
- mlx
- mlx-my-repo
---
# WaveCut/QwenLong-L1-32B-mlx-8Bit
The Model [WaveCut/QwenLong-L1-32B-mlx-8Bit](https://huggingface.co/WaveCut/QwenLong-L1-32B-mlx-8Bit) was converted to MLX format from [Tongyi-Zhiwen/QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("WaveCut/QwenLong-L1-32B-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Santiagoescamilla/gabomockups | Santiagoescamilla | 2025-05-26T23:37:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-26T23:00:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: gabomockups
---
# Gabomockups
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `gabomockups` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "gabomockups",
"lora_weights": "https://huggingface.co/Santiagoescamilla/gabomockups/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Santiagoescamilla/gabomockups', weight_name='lora.safetensors')
image = pipeline('gabomockups').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1250
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Santiagoescamilla/gabomockups/discussions) to add images that show off what you’ve made with this LoRA.
|
lsalsi/default_multi_species_2kb_sh_gc | lsalsi | 2025-05-26T23:33:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"esm",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-26T22:38:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
beanne-valerie-viral-video/Link.Ng.Video.beanne.dela.cruz.and.patrick.video.hebeoh.beanne.valerie.dela.cruz.viral.scandal | beanne-valerie-viral-video | 2025-05-26T17:34:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T17:34:11Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?beanne-valerie)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?beanne-valerie)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?beanne-valerie) |
suwonpabby/gemma-3-1b-it | suwonpabby | 2025-05-26T17:09:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T12:10:21Z | ---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma-3-1b-it
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-1b-it
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="suwonpabby/gemma-3-1b-it", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
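For orientation, a minimal sketch of what such an SFT run looks like with TRL's `SFTTrainer` is shown below; the dataset name and output directory are placeholders, not the actual training configuration:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the card does not document the actual training data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-1b-it",                    # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-3-1b-it-sft"),  # placeholder output dir
)
trainer.train()
```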
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
NFX74/MNLP_M2_document_encoder | NFX74 | 2025-05-26T17:05:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-26T17:04:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felineogil/Felineogil | Felineogil | 2025-05-26T16:36:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T16:36:30Z | ---
license: apache-2.0
---
|
samoline/6fdf07a0-f0a9-4b7b-a89d-f1260545e05c | samoline | 2025-05-26T15:58:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Maykeye/TinyLLama-v0",
"base_model:finetune:Maykeye/TinyLLama-v0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T15:58:26Z | ---
base_model: Maykeye/TinyLLama-v0
library_name: transformers
model_name: 6fdf07a0-f0a9-4b7b-a89d-f1260545e05c
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 6fdf07a0-f0a9-4b7b-a89d-f1260545e05c
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/6fdf07a0-f0a9-4b7b-a89d-f1260545e05c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/samoline-nan/Gradients-On-Demand/runs/qk48yus7)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
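For orientation, here is a minimal, hedged GRPO sketch with TRL's `GRPOTrainer`; the dataset and the toy length-based reward are illustrative stand-ins, not the reward or data actually used for this run:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters.
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Maykeye/TinyLLama-v0",             # base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),   # placeholder output dir
    train_dataset=dataset,
)
trainer.train()
```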
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
paraphraser-models/bart-cultural-rewriter-Type_1_High_PDI___High_IDV___High_UAI_gpt4o_raw_vs_adjusted | paraphraser-models | 2025-05-26T11:46:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-26T11:45:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tscstudios/hxvs7usvh9ephlz63pexfp2ovkj2_c4fe8aaa-cfac-47c1-b692-9df7fdf1d673 | tscstudios | 2025-05-26T11:37:03Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-26T11:37:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Hxvs7Usvh9Ephlz63Pexfp2Ovkj2_C4Fe8Aaa Cfac 47C1 B692 9Df7Fdf1D673
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/hxvs7usvh9ephlz63pexfp2ovkj2_c4fe8aaa-cfac-47c1-b692-9df7fdf1d673/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/hxvs7usvh9ephlz63pexfp2ovkj2_c4fe8aaa-cfac-47c1-b692-9df7fdf1d673', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/hxvs7usvh9ephlz63pexfp2ovkj2_c4fe8aaa-cfac-47c1-b692-9df7fdf1d673/discussions) to add images that show off what you’ve made with this LoRA.
|
luren87/artale_chatbot | luren87 | 2025-05-26T11:30:08Z | 0 | 0 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-26T11:16:50Z | ---
license: other
license_name: test
license_link: LICENSE
---
|
focusqueenfq/2000BrideFQ | focusqueenfq | 2025-05-26T11:00:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-26T10:39:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 2000BrideFQ
---
# 2000Bridefq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `2000BrideFQ` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "2000BrideFQ",
"lora_weights": "https://huggingface.co/focusqueenfq/2000BrideFQ/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('focusqueenfq/2000BrideFQ', weight_name='lora.safetensors')
image = pipeline('2000BrideFQ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
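If you want to dial the LoRA's strength up or down at inference time, here is a hedged sketch using named adapters (this assumes a diffusers version with the PEFT integration; the `0.7` weight is illustrative):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('focusqueenfq/2000BrideFQ', weight_name='lora.safetensors', adapter_name='bride')
pipeline.set_adapters(['bride'], adapter_weights=[0.7])  # run the LoRA at 70% strength (illustrative)
image = pipeline('2000BrideFQ').images[0]
```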
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/focusqueenfq/2000BrideFQ/discussions) to add images that show off what you’ve made with this LoRA.
|
LandCruiser/sn29_cold_2605_14 | LandCruiser | 2025-05-26T10:24:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T01:56:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/Kimi-VL-A3B-Thinking-8bit | mlx-community | 2025-05-26T10:24:09Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"kimi_vl",
"feature-extraction",
"internvl",
"custom_code",
"mlx",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-1B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-1B-Instruct",
"license:other",
"region:us"
] | image-text-to-text | 2025-04-17T14:24:42Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-1B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- mlx
---
# mlx-community/Kimi-VL-A3B-Thinking-8bit
This model was converted to MLX format from [`moonshotai/Kimi-VL-A3B-Thinking`](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking) using mlx-vlm version **0.1.23**.
Refer to the [original model card](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Kimi-VL-A3B-Thinking-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
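A Python-API equivalent is sketched below; this assumes mlx-vlm's `load`/`generate` helpers and an example image path, and exact signatures vary between mlx-vlm versions, so check the mlx-vlm documentation:
```python
from mlx_vlm import load, generate

# Assumed API: load() returns (model, processor); generate() accepts prompt and image keywords.
model, processor = load("mlx-community/Kimi-VL-A3B-Thinking-8bit")
output = generate(model, processor, prompt="Describe this image.", image="example.jpg")
print(output)
```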
|
tartuNLP/Llammas-base-p1-GPT-4o-human-error-pseudo-m2 | tartuNLP | 2025-05-26T09:38:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"base_model:tartuNLP/Llammas-base",
"base_model:finetune:tartuNLP/Llammas-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-14T06:07:41Z | ---
library_name: transformers
base_model:
- tartuNLP/Llammas-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sp-embraceable/Phi4-FT-unsloth-runpod-2500steps-e1-above90-adapter | sp-embraceable | 2025-05-26T09:05:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/phi-4",
"base_model:adapter:unsloth/phi-4",
"region:us"
] | null | 2025-05-26T09:01:31Z | ---
base_model: unsloth/Phi-4
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Datasmartly/nllb-tamazight-finetunedmixe1 | Datasmartly | 2025-05-26T09:04:30Z | 0 | 0 | null | [
"safetensors",
"m2m_100",
"generated_from_trainer",
"base_model:facebook/nllb-200-3.3B",
"base_model:finetune:facebook/nllb-200-3.3B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-26T08:49:01Z | ---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-3.3B
tags:
- generated_from_trainer
model-index:
- name: nllb-tamazight-finetunedmixe1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-tamazight-finetunedmixe1
This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2100
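No usage snippet ships with this card; a hedged inference sketch follows, assuming the fine-tune kept NLLB's FLORES-200 language codes (`zgh_Tfng` is Standard Moroccan Tamazight; swap in the code for your target direction):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Datasmartly/nllb-tamazight-finetunedmixe1")
model = AutoModelForSeq2SeqLM.from_pretrained("Datasmartly/nllb-tamazight-finetunedmixe1")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
# Force the decoder to start in the target language (assumed FLORES-200 code).
out = model.generate(**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("zgh_Tfng"))
print(tokenizer.decode(out[0], skip_special_tokens=True))
```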
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
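As a convenience, here are the reported values expressed as `Seq2SeqTrainingArguments`; the actual training script is not provided with this card, so this is a reconstruction, not the original code:
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="nllb-tamazight-finetunedmixe1",  # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```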
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0406 | 1.0 | 225 | 0.7700 |
| 0.1517 | 2.0 | 450 | 0.1944 |
| 0.0553 | 3.0 | 675 | 0.2100 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.4.1+cu124
- Datasets 3.6.0
- Tokenizers 0.15.2
|
Nana95/aimodel | Nana95 | 2025-05-26T08:57:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-26T08:43:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: aimodel
---
# Aimodel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `aimodel` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "aimodel",
"lora_weights": "https://huggingface.co/Nana95/aimodel/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nana95/aimodel', weight_name='lora.safetensors')
image = pipeline('aimodel').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nana95/aimodel/discussions) to add images that show off what you’ve made with this LoRA.
|
g-assismoraes/gemma-3-4b-it-fpi-alpha2.0-fromit-var-agnews | g-assismoraes | 2025-05-26T05:57:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-26T05:53:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple3_aggr_last_starting_with_inst_analyzer | jeongseokoh | 2025-05-26T05:25:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:18:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
g-assismoraes/gemma-3-1b-it-agnews | g-assismoraes | 2025-05-26T05:16:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T02:33:58Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-3-1b-it-agnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-3-1b-it-agnews
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
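For reference, the settings above map onto `TrainingArguments` roughly as follows (a minimal sketch, not the authors' actual script; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above.
args = TrainingArguments(
    output_dir="gemma-3-1b-it-agnews",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```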
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1073 | 1.0 | 27000 | 1.1091 |
| 1.0571 | 2.0 | 54000 | 1.1085 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
bigband/VisionaryPoseidon | bigband | 2025-05-26T00:41:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T00:32:03Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
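With chat-style input, recent `transformers` versions return the updated message list in `generated_text`, so the assistant's reply is the last entry; a minimal sketch (behavior can vary across pipeline versions):

```python
# The returned conversation includes the model's reply as its final message.
reply = output[0]["generated_text"][-1]["content"]
print(reply)
```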
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
raghadabusnayma/tinyllama-rickiestrick-chatbot | raghadabusnayma | 2025-05-26T00:39:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T00:39:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
g-assismoraes/gemma-3-4b-it-fpi-alpha3.0-fromit-var-hatebr | g-assismoraes | 2025-05-25T23:40:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-25T23:36:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unrented5443/sn11-v2-5 | unrented5443 | 2025-05-25T21:34:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T21:34:43Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
ruixuan-zhang/nanoVLM | ruixuan-zhang | 2025-05-25T17:56:23Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-25T17:55:58Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fit within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M-parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("ruixuan-zhang/nanoVLM")
```
|
lamphuc1603/t5-lora-vietnamese | lamphuc1603 | 2025-05-25T17:53:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google-t5/t5-base",
"base_model:adapter:google-t5/t5-base",
"region:us"
] | null | 2025-05-25T17:44:51Z | ---
base_model: t5-base
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
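Pending author-provided code, here is a minimal loading sketch, assuming the adapter applies to the `t5-base` checkpoint named in the metadata (the `summarize:` prompt is purely illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
model = PeftModel.from_pretrained(base, "lamphuc1603/t5-lora-vietnamese")  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("t5-base")

inputs = tokenizer("summarize: ...", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```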
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
deswaq/alfa9 | deswaq | 2025-05-25T17:47:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T17:42:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/cond_emotions_ntr_25_nte_80_preamble_1proxy_cont_20250525_164022 | gradientrouting-spar | 2025-05-25T17:33:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T17:31:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PepitaxX/qwen3-0.6B-openQA | PepitaxX | 2025-05-25T15:07:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T15:02:36Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
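Until the authors fill this in, a minimal sketch assuming standard Qwen3 causal-LM loading (requires a `transformers` release with Qwen3 support; the question is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PepitaxX/qwen3-0.6B-openQA"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```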
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
exiort/model | exiort | 2025-05-25T14:40:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"region:us"
] | null | 2025-05-20T21:35:30Z | ---
base_model: google/gemma-3-12b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
HOT-VIDEO-Katrina-Lim-Viral-Kiffy/Katrina.Lim.Viral.Video.link.Official | HOT-VIDEO-Katrina-Lim-Viral-Kiffy | 2025-05-25T12:13:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-25T12:12:43Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
MusYW/my_awesome_qa_model | MusYW | 2025-05-25T11:48:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-05-25T11:48:39Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7042
## Model description
More information needed
## Intended uses & limitations
More information needed
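As a starting point, a hedged inference sketch (the task follows the card's question-answering tag; question and context are placeholders):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="MusYW/my_awesome_qa_model")
result = qa(
    question="What does DistilBERT distill?",
    context="DistilBERT is a smaller, faster model distilled from BERT.",
)
print(result["answer"], result["score"])
```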
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4173 |
| 2.8101 | 2.0 | 500 | 1.7850 |
| 2.8101 | 3.0 | 750 | 1.7042 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF | mradermacher | 2025-05-25T09:45:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:Jeremmmyyyyy/Qwen-poetry-logprob-no-norm-v3",
"base_model:quantized:Jeremmmyyyyy/Qwen-poetry-logprob-no-norm-v3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-25T09:34:35Z | ---
base_model: Jeremmmyyyyy/Qwen-poetry-logprob-no-norm-v3
language:
- en
library_name: transformers
model_name: Qwen-poetry-logprob-no-norm-v3
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jeremmmyyyyy/Qwen-poetry-logprob-no-norm-v3
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
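As a concrete example, a minimal Python sketch using the llama-cpp-python bindings (an assumption; any GGUF-capable runtime works, and the file name is taken from the table below):

```python
from llama_cpp import Llama

# Load the recommended Q4_K_M quant and run a short completion.
llm = Llama(model_path="Qwen-poetry-logprob-no-norm-v3.Q4_K_M.gguf")
out = llm("Write a short poem about rain.", max_tokens=128)
print(out["choices"][0]["text"])
```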
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-poetry-logprob-no-norm-v3-GGUF/resolve/main/Qwen-poetry-logprob-no-norm-v3.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
thucdangvan020999/ultravox_ckpt500_merged | thucdangvan020999 | 2025-05-25T08:43:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-05-25T08:43:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
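Until the authors fill this in, here is a hedged sketch: the `custom_code` tag suggests the repo ships its own modeling/processing code, so `trust_remote_code=True` is assumed, and the expected audio/text input format depends on that custom code.
```python
from transformers import AutoModel, AutoProcessor

repo = "thucdangvan020999/ultravox_ckpt500_merged"
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
# Feature extraction: run processed audio/text through the model; consult the
# repo's custom code for the exact input keys it expects.
```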
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DialoGPT-Elysia-GGUF | mradermacher | 2025-05-25T00:27:03Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"en",
"base_model:Jaszii/DialoGPT-Elysia",
"base_model:quantized:Jaszii/DialoGPT-Elysia",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T00:24:40Z | ---
base_model: Jaszii/DialoGPT-Elysia
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jaszii/DialoGPT-Elysia
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DialoGPT-Elysia-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
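As a hedged sketch with `llama-cpp-python`: DialoGPT checkpoints are turn-based GPT-2 models that separate dialogue turns with the end-of-text token, so the prompt format below is an assumption to adapt.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/DialoGPT-Elysia-GGUF",
    filename="DialoGPT-Elysia.Q4_K_M.gguf",  # "fast, recommended"
)
# Previous dialogue turns are joined with the EOS token in DialoGPT's format.
reply = llm("Hello, who are you?<|endoftext|>", max_tokens=48)
print(reply["choices"][0]["text"])
```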
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DialoGPT-Elysia-GGUF/resolve/main/DialoGPT-Elysia.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
vslinx/LoRA-Collection | vslinx | 2025-05-25T00:07:21Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T17:39:47Z | ### This is a backup collection of all my models released on [civitai](https://civitai.com/user/vslinx) |
Cherran/medical_gemma_1b_sft | Cherran | 2025-05-24T18:22:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-05-24T18:21:43Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
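In the absence of author-provided code, a minimal PEFT sketch (base model taken from the metadata above; assumes `peft`, `bitsandbytes`, and `accelerate` are installed; the prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-3-1b-it-unsloth-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Cherran/medical_gemma_1b_sft")
tok = AutoTokenizer.from_pretrained(base_id)

inputs = tok("List common symptoms of anemia.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```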
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
FizzyMango/whisper_szokz | FizzyMango | 2025-05-24T07:56:45Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T07:53:38Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/vice-headlines-GGUF | mradermacher | 2025-05-23T20:46:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:marcderbauer/vice-headlines",
"base_model:quantized:marcderbauer/vice-headlines",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T20:42:15Z | ---
base_model: marcderbauer/vice-headlines
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/marcderbauer/vice-headlines
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/vice-headlines-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
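If you have already downloaded a quant from the table below, loading by local path is a minimal alternative; the path, context size, and prompt here are placeholders.
```python
from llama_cpp import Llama

# Assumes a quant from the table below sits in the current directory.
llm = Llama(model_path="vice-headlines.Q4_K_M.gguf", n_ctx=512)
print(llm("The", max_tokens=24)["choices"][0]["text"])  # headline-style completion
```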
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/vice-headlines-GGUF/resolve/main/vice-headlines.f16.gguf) | f16 | 1.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Martinalexd80/Sophie.Rain.Spiderman.Viral.Full.Video.Tutorial | Martinalexd80 | 2025-05-23T16:44:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-23T15:58:27Z | 3 Minutes ago — Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
In the ever evolving landscape of celebrity culture, the Ishowspeedscandal underscores the relentless pursuit of sensationalism, a pursuit that often comes at the expense of truth and dignity. As we navigate the complexities of the digital age, the line between entertainment and exploitation remains perilously thin.
The recurrent theme of Leaked tapes and the subsequent fallout serves as a reminder of the fragility of reputation in the digital era. As the lines between private and public life continue to blur, celebrities like Prison Officerfind themselves at the mercy of internet chatter, where a rumor can ignite a firestorm of speculation and judgment
As the situation unfolds, the truth remains shrouded in mystery, leaving the public to ponder the authenticity of the rumors. In a world where fame and infamy are two sides of the same coin, the saga of Ishowspeedis a testament to the power of social media to shape narratives and challenge the boundaries of privacy and consent
|
mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF | mradermacher | 2025-05-23T12:00:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"conversational",
"en",
"base_model:allura-org/Q3-30b-A3b-Pentiment",
"base_model:quantized:allura-org/Q3-30b-A3b-Pentiment",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-23T06:18:08Z | ---
base_model: allura-org/Q3-30b-A3b-Pentiment
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- roleplay
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allura-org/Q3-30b-A3b-Pentiment
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
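A minimal `llama-cpp-python` sketch for these imatrix quants; it assumes the GGUF embeds a chat template (typical for Qwen3 conversions), and the context size and prompt are placeholders.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF",
    filename="Q3-30b-A3b-Pentiment.i1-Q4_K_S.gguf",  # "optimal size/speed/quality"
    n_ctx=4096,
)
msg = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe a rainy medieval market."}]
)
print(msg["choices"][0]["message"]["content"])
```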
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-30b-A3b-Pentiment-i1-GGUF/resolve/main/Q3-30b-A3b-Pentiment.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jordinia/NetPro-Qwen3-1.7B-ClfDC | jordinia | 2025-05-22T18:21:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B-Base",
"base_model:finetune:unsloth/Qwen3-1.7B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T18:21:34Z | ---
base_model: unsloth/Qwen3-1.7B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jordinia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tdooms/svhn-l2 | tdooms | 2025-05-22T18:02:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T18:02:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
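The repository name hints at an SVHN digit model, but the card does not say; a generic, hedged loading sketch that first inspects the registered architecture:
```python
from transformers import AutoConfig, AutoModel

repo = "tdooms/svhn-l2"
config = AutoConfig.from_pretrained(repo)
print(config.architectures)  # check what the checkpoint actually is
model = AutoModel.from_pretrained(repo)
```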
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xbinbin/DSR1-Llama8B-0-2000text_4.3.model | xbinbin | 2025-04-03T08:09:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T08:08:55Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xbinbin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
12ill/Llama-3.2-3B-finetuned | 12ill | 2025-04-03T08:09:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T08:06:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
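A minimal causal-LM sketch based on the `llama` / `text-generation` tags above; the prompt and generation length are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "12ill/Llama-3.2-3B-finetuned"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tok("Explain overfitting in one sentence.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(out[0], skip_special_tokens=True))
```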
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lara1510/gemma-3-12b-lora | lara1510 | 2025-04-03T08:04:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-04-03T08:04:26Z | ---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
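A hedged PEFT sketch using the base model from the metadata above. Note that Gemma 3 12B instruct checkpoints are multimodal, so if `AutoModelForCausalLM` rejects the base config, the Gemma 3 conditional-generation class is the likely substitute.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-3-12b-it-unsloth-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "lara1510/gemma-3-12b-lora")
tok = AutoTokenizer.from_pretrained(base_id)
```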
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Jollyfish/wlgv3t-new-fold4-26-3-3 | Jollyfish | 2025-04-03T08:04:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-03T07:55:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
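Given the `whisper` / `automatic-speech-recognition` tags above, a minimal pipeline sketch; `"audio.wav"` is a placeholder file, and the pipeline handles decoding and resampling of common audio formats.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Jollyfish/wlgv3t-new-fold4-26-3-3")
print(asr("audio.wav")["text"])
```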
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Savoxism/Finetuned-Taxi-v3 | Savoxism | 2025-04-03T08:04:16Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T08:04:14Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Finetuned-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # Taxi-v3 ships with gymnasium (or classic gym)

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="Savoxism/Finetuned-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
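To act greedily with the loaded table — a minimal sketch assuming the pickle stores the Q-table under the `"qtable"` key, as in the course template, and the gymnasium reset/step API:
```python
import numpy as np

state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # pick the greedy action for this state
state, reward, terminated, truncated, info = env.step(action)
```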
|
RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf | RichardErkhov | 2025-04-03T08:04:06Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T06:46:17Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3.5-mini-instruct-rag-score-generator - GGUF
- Model creator: https://huggingface.co/gutsartificial/
- Original model: https://huggingface.co/gutsartificial/Phi-3.5-mini-instruct-rag-score-generator/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3.5-mini-instruct-rag-score-generator.Q2_K.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi-3.5-mini-instruct-rag-score-generator.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [Phi-3.5-mini-instruct-rag-score-generator.IQ3_S.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3.5-mini-instruct-rag-score-generator.IQ3_M.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q3_K.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi-3.5-mini-instruct-rag-score-generator.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q4_0.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3.5-mini-instruct-rag-score-generator.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q4_K.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q4_1.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q5_0.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q5_K.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q5_1.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q6_K.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi-3.5-mini-instruct-rag-score-generator.Q8_0.gguf](https://huggingface.co/RichardErkhov/gutsartificial_-_Phi-3.5-mini-instruct-rag-score-generator-gguf/blob/main/Phi-3.5-mini-instruct-rag-score-generator.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gutsartificial
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/EITD_-_phi_2-gguf | RichardErkhov | 2025-04-03T08:03:50Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T07:25:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi_2 - GGUF
- Model creator: https://huggingface.co/EITD/
- Original model: https://huggingface.co/EITD/phi_2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi_2.Q2_K.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q2_K.gguf) | Q2_K | 1.35GB |
| [phi_2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [phi_2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [phi_2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [phi_2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [phi_2.Q3_K.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q3_K.gguf) | Q3_K | 1.75GB |
| [phi_2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [phi_2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [phi_2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [phi_2.Q4_0.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q4_0.gguf) | Q4_0 | 2.03GB |
| [phi_2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [phi_2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [phi_2.Q4_K.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q4_K.gguf) | Q4_K | 2.16GB |
| [phi_2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [phi_2.Q4_1.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q4_1.gguf) | Q4_1 | 2.24GB |
| [phi_2.Q5_0.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q5_0.gguf) | Q5_0 | 2.46GB |
| [phi_2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [phi_2.Q5_K.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q5_K.gguf) | Q5_K | 2.53GB |
| [phi_2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [phi_2.Q5_1.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q5_1.gguf) | Q5_1 | 2.68GB |
| [phi_2.Q6_K.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q6_K.gguf) | Q6_K | 2.92GB |
| [phi_2.Q8_0.gguf](https://huggingface.co/RichardErkhov/EITD_-_phi_2-gguf/blob/main/phi_2.Q8_0.gguf) | Q8_0 | 3.78GB |
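If you only need one of the files above, a single quant can be fetched without cloning the whole repo; a minimal sketch, assuming the `huggingface_hub` package is installed (repo id and filename are taken verbatim from the table, with Q4_K_M as an example pick):
```python
# Sketch: download a single quant file from this repo (names from the table above).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/EITD_-_phi_2-gguf",
    filename="phi_2.Q4_K_M.gguf",  # any filename from the table works
)
print(path)  # local path to the cached GGUF file
```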
Original model description:
---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EITD
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3.5-mini-instruct
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/google-gemma-2-2b-it-bnb-8bit-smashed | PrunaAI | 2025-04-03T08:03:33Z | 4 | 0 | null | [
"safetensors",
"gemma2",
"pruna-ai",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-19T10:29:34Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: google/gemma-2-2b-it
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing and stop when the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo google/gemma-2-2b-it are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/google-gemma-2-2b-it-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
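Since the base model is instruction-tuned, prompts generally behave better when wrapped in the model's chat template rather than passed as raw text. Below is a minimal sketch of that variant, reusing the `model` and `tokenizer` objects from the snippet above; the generation settings are illustrative, not prescribed by this repo:
```python
# Sketch: same model/tokenizer as above, but formatting the prompt with the
# chat template the instruct model was trained on.
messages = [{"role": "user", "content": "What is the color of prunes?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=216)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```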
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-2-2b-it, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/Dreamer-7B-Reddit-GGUF | mradermacher | 2025-04-03T08:01:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:osunlp/Dreamer-7B-Reddit",
"base_model:quantized:osunlp/Dreamer-7B-Reddit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T07:51:37Z | ---
base_model: osunlp/Dreamer-7B-Reddit
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/osunlp/Dreamer-7B-Reddit
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
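As one concrete option, the quants can be pulled straight from the Hub and run with `llama-cpp-python`; a minimal sketch, assuming that package is installed and that your llama.cpp build supports this architecture. Note that this loads only the language-model side — the `mmproj` vision file listed below requires a vision-capable runtime:
```python
# Sketch: download a quant from this repo and chat with it via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Dreamer-7B-Reddit-GGUF",
    filename="Dreamer-7B-Reddit.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```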
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | vision supplement |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dreamer-7B-Reddit-GGUF/resolve/main/Dreamer-7B-Reddit.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/text2vec-large-chinese-GGUF | mradermacher | 2025-04-03T08:01:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text2vec",
"feature-extraction",
"sentence-similarity",
"zh",
"base_model:GanymedeNil/text2vec-large-chinese",
"base_model:quantized:GanymedeNil/text2vec-large-chinese",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-03T07:53:48Z | ---
base_model: GanymedeNil/text2vec-large-chinese
language:
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GanymedeNil/text2vec-large-chinese
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/text2vec-large-chinese-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
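Because this is an embedding model rather than a chat model, the GGUF is used in embedding mode; a minimal sketch with `llama-cpp-python`, assuming that package is installed and that your llama.cpp build supports this converted architecture:
```python
# Sketch: sentence embeddings from the GGUF via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/text2vec-large-chinese-GGUF",
    filename="text2vec-large-chinese.Q8_0.gguf",  # the model is small, so Q8_0 stays cheap
    embedding=True,  # run the model in embedding mode
)
vec = llm.embed("如何更换花呗绑定银行卡")  # returns a list of floats
print(len(vec))  # embedding dimensionality
```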
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/text2vec-large-chinese-GGUF/resolve/main/text2vec-large-chinese.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Inabia-AI/Kymera_germany_standalone_lora_3.1_2025_04_03_06_24_36 | Inabia-AI | 2025-04-03T06:26:34Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T06:26:07Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Inabia-AI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
memeviss/PLM-x_4 | memeviss | 2025-04-03T06:25:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T06:14:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memeviss/PLM-x_3 | memeviss | 2025-04-03T06:25:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T06:14:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf | RichardErkhov | 2025-04-03T06:22:16Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T04:59:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3.5-mini-quote-planName-SFT-PLNYR - GGUF
- Model creator: https://huggingface.co/amod-plnyr/
- Original model: https://huggingface.co/amod-plnyr/Phi-3.5-mini-quote-planName-SFT-PLNYR/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q2_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q2_K.gguf) | Q2_K | 1.32GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ3_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ3_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K.gguf) | Q3_K | 1.82GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_0.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_K.gguf) | Q4_K | 2.23GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_1.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_0.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_K.gguf) | Q5_K | 2.62GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_1.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q6_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi-3.5-mini-quote-planName-SFT-PLNYR.Q8_0.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-PLNYR-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-PLNYR.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf | RichardErkhov | 2025-04-03T06:21:39Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T04:59:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3.5-mini-quote-planName-SFT-v2 - GGUF
- Model creator: https://huggingface.co/amod-plnyr/
- Original model: https://huggingface.co/amod-plnyr/Phi-3.5-mini-quote-planName-SFT-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q2_K.gguf) | Q2_K | 1.32GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q3_K.gguf) | Q3_K | 1.82GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q4_K.gguf) | Q4_K | 2.23GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q5_K.gguf) | Q5_K | 2.62GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi-3.5-mini-quote-planName-SFT-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/amod-plnyr_-_Phi-3.5-mini-quote-planName-SFT-v2-gguf/blob/main/Phi-3.5-mini-quote-planName-SFT-v2.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaestrAI/character-lora-1743659698 | MaestrAI | 2025-04-03T06:21:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-03T05:54:56Z | # character LoRA Model
This is a LoRA model for the character "character".
Created at 2025-04-03 07:54:59
|
memeviss/PLM-x_2 | memeviss | 2025-04-03T06:20:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T06:14:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_2_def_lora2 | xw17 | 2025-04-03T06:20:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T06:20:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
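Since the card itself leaves this blank, here is a minimal, hedged sketch assuming the checkpoint is a causal LM loadable with the standard `transformers` Auto classes (suggested by the TinyLlama-1.1B-Chat base named in the repo id); if the repo holds only LoRA adapter weights, they would need to be attached to the base model with PEFT instead.
```python
# Hedged sketch: assumes a full causal-LM checkpoint fine-tuned from TinyLlama-1.1B-Chat-v1.0.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_2_def_lora2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```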
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_1_def_lora2 | xw17 | 2025-04-03T06:18:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T06:18:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nasserthmer/t5-small-finetuned-xsum | Nasserthmer | 2025-04-03T06:17:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-03T05:28:57Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
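The model name suggests XSum-style abstractive summarization, so here is a hedged usage sketch; the training dataset and intended domain are not documented, so treat this as illustrative only.
```python
# Hedged sketch: assumes this T5 checkpoint was fine-tuned for summarization, per its name.
from transformers import pipeline

summarizer = pipeline("summarization", model="Nasserthmer/t5-small-finetuned-xsum")
article = "The tower is 324 metres tall, about the same height as an 81-storey building."
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```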
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 21 | 0.2889 | 0.0 | 0.0 | 0.0 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
hardlyworking/Florence2LargeNSFW | hardlyworking | 2025-04-03T06:13:54Z | 0 | 0 | null | [
"safetensors",
"florence2",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-04-03T06:09:50Z | ---
license: apache-2.0
---
|
Onuii/DAMI-base-checkpoint-600 | Onuii | 2025-04-03T06:13:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:kakaocorp/kanana-nano-2.1b-base",
"base_model:adapter:kakaocorp/kanana-nano-2.1b-base",
"region:us"
] | null | 2025-04-03T06:11:04Z | ---
base_model: kakaocorp/kanana-nano-2.1b-base
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
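The card leaves this blank, but it declares a PEFT adapter for `kakaocorp/kanana-nano-2.1b-base`, so a minimal hedged sketch of attaching the adapter follows; the causal-LM head is an assumption, so swap the Auto class if the base model uses a different task head.
```python
# Hedged sketch: load the base model, then attach this repo's PEFT (LoRA) adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "kakaocorp/kanana-nano-2.1b-base"
adapter_id = "Onuii/DAMI-base-checkpoint-600"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights on top of the frozen base
```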
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
bhaskars113/DeepSeek-R1-Entity-8B-quantized-V1.3 | bhaskars113 | 2025-04-03T06:11:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T06:10:14Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bhaskars113
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
deepsea-ai/speech-tts-female-zh | deepsea-ai | 2025-04-03T06:10:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-01T11:04:07Z | ---
license: apache-2.0
---
|
minyong/20250403_054620_gemma-3-27b-pt_LoRA | minyong | 2025-04-03T06:10:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-27b-pt",
"base_model:finetune:google/gemma-3-27b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T05:47:58Z | ---
base_model: google/gemma-3-27b-pt
library_name: transformers
model_name: 20250403_054620_gemma-3-27b-pt_LoRA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 20250403_054620_gemma-3-27b-pt_LoRA
This model is a fine-tuned version of [google/gemma-3-27b-pt](https://huggingface.co/google/gemma-3-27b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="minyong/20250403_054620_gemma-3-27b-pt_LoRA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kostiantynk1205/181dc2e9-ddce-4469-bdf8-8318d26fa187 | kostiantynk1205 | 2025-04-03T06:10:03Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"region:us"
] | null | 2025-04-03T06:09:10Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/mistral-7b
model-index:
- name: kostiantynk1205/181dc2e9-ddce-4469-bdf8-8318d26fa187
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk1205/181dc2e9-ddce-4469-bdf8-8318d26fa187
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF | mradermacher | 2025-04-03T06:09:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/DeeperHermes3_R1_D_L3_8b",
"base_model:quantized:mergekit-community/DeeperHermes3_R1_D_L3_8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T05:41:25Z | ---
base_model: mergekit-community/DeeperHermes3_R1_D_L3_8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/DeeperHermes3_R1_D_L3_8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeeperHermes3_R1_D_L3_8b-GGUF/resolve/main/DeeperHermes3_R1_D_L3_8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
carllaallrac/marcela | carllaallrac | 2025-04-03T06:09:04Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-03T05:32:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF | mradermacher | 2025-04-03T06:08:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:beita6969/DeepSeek-R1-Distill-Qwen-Medical",
"base_model:quantized:beita6969/DeepSeek-R1-Distill-Qwen-Medical",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T01:10:42Z | ---
base_model: beita6969/DeepSeek-R1-Distill-Qwen-Medical
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/beita6969/DeepSeek-R1-Distill-Qwen-Medical
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-Medical-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-Medical.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TheMockingJay1013/gemma-3-sft-peft-dare | TheMockingJay1013 | 2025-04-03T06:05:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"base_model:TheMockingJay1013/gemma-3-sft-peft",
"base_model:merge:TheMockingJay1013/gemma-3-sft-peft",
"base_model:google/gemma-3-1b-pt",
"base_model:merge:google/gemma-3-1b-pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-31T13:05:07Z | ---
base_model:
- TheMockingJay1013/gemma-3-sft-peft
- google/gemma-3-1b-pt
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method using [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) as a base.
### Models Merged
The following models were included in the merge:
* [TheMockingJay1013/gemma-3-sft-peft](https://huggingface.co/TheMockingJay1013/gemma-3-sft-peft)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: google/gemma-3-1b-pt
dtype: bfloat16
merge_method: dare_linear
modules:
default:
slices:
- sources:
- layer_range: [0, 26]
model: google/gemma-3-1b-pt
- layer_range: [0, 26]
model: TheMockingJay1013/gemma-3-sft-peft
parameters:
density: 1.0
weight: 1.0
parameters:
int8_mask: 0.0
```
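To reproduce a merge from a config like the one above, mergekit's CLI can be invoked roughly as follows; this is a hedged sketch that assumes mergekit is installed and the YAML is saved locally as `config.yaml` (a hypothetical filename).
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model  # writes the merged weights to ./merged-model
```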
|
OpenMEDLab/PULSE-20bv5 | OpenMEDLab | 2025-04-03T06:03:10Z | 38 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"PULSE",
"llm",
"conversational",
"zh",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-06T11:35:33Z | ---
license: agpl-3.0
language:
- zh
tags:
- PULSE
- llm
---
# PULSE
[](https://github.com/openmedlab/PULSE/blob/main/LICENSE)
[](https://github.com/openmedlab/PULSE/blob/main/MODEL_LICENSE)
## Table of Contents
- [Open-Source Models](#open-source-models)
- [Model Introduction](#model-introduction)
- [Limitations](#limitations)
- [Elo Evaluation](#elo-evaluation)
- [Inference](#inference)
- [Hardware Requirements](#hardware-requirements)
- [Download and Installation](#download-and-installation)
- [Usage Examples](#usage-examples)
- [Acknowledgements](#acknowledgements)
- [License](#license)
----
## Open-Source Models
- [**PULSE-20bv5**](https://huggingface.co/OpenMEDLab/PULSE-20bv5)
## Model Introduction
- **Large-scale training**: PULSE is further fine-tuned from the [internlm-20b](https://huggingface.co/internlm/internlm-20b) model on roughly 4,000,000 SFT examples from the medical and general domains.
- **Comprehensive medical NLP tasks**: PULSE supports a wide range of natural language processing tasks in the medical domain, including health education, physician exam questions, report interpretation, medical record structuring, and simulated diagnosis and treatment.
### Limitations
Because of the model's relatively small parameter count and the autoregressive generation paradigm, the reasoning the model produces about disease diagnosis and treatment cannot replace the advice and treatment plans of licensed physicians. All answers are for reference only and should not be used as a basis for diagnosis or treatment. We strongly recommend that users seek the help and advice of professional doctors when they need to diagnose or treat an illness.
### Elo Evaluation
| Model Name | AVG Rank | MedQA-USMLE | MedQA-Mainland | PromptCBLUE | WebMedQA | CheckupQA | MedicineQA | DialogSumm | MedTriage (F1) |
|:-------------|-----------:|--------------:|-----------------:|--------------:|-----------:|------------:|-------------:|-------------:|-----------------:|
| GPT-4 | 1.25 | 1129 | 1117 | 1110 | 1116 | 1096 | 1098 | 1109 | 0.65 |
| PULSE-Pro | 1.75 | 1089 | 1092 | 1088 | 1119 | 1105 | 1083 | 1096 | 0.63 |
| ChatGPT | 4.00 | 1086 | 1057 | 1064 | 1053 | 1020 | 1029 | 1080 | 0.43 |
| PULSE-20b | 4.12 | 1042 | 1024 | 1039 | 1059 | 1049 | 1069 | 1076 | 0.40 |
| Baichuan2 | 4.50 | 1024 | 1041 | 1065 | 1044 | 1062 | 1035 | 1069 | 0.33 |
| ChatGLM3 | 5.62 | 1038 | 1062 | 997 | 1012 | 1003 | 1024 | 1021 | 0.06 |
| HuatuoGPT2 | 7.62 | 955 | 993 | 985 | 963 | 983 | 1003 | 980 | 0.01 |
| QiZhenGPT | 8.38 | 955 | 959 | 945 | 989 | 1039 | 932 | 921 | 0.00 |
| BenTsao | 8.75 | 961 | 921 | 936 | 910 | 927 | 986 | 920 | 0.02 |
| BianQue2 | 10.12 | 913 | 928 | 919 | 988 | 974 | 900 | 908 | 0.00 |
| MING | 10.75 | 902 | 909 | 924 | 867 | 862 | 960 | 918 | 0.01 |
| DoctorGLM | 11.12 | 906 | 896 | 930 | 879 | 880 | 880 | 905 | 0.00 |
Note: PULSE-20b = PULSE-20bv5
## Inference
### Download and Installation
1. Download the contents of this repository to a local/remote server
```bash
git clone https://github.com/openmedlab/PULSE
cd PULSE
```
2. Create a conda environment and install the dependencies
```bash
conda env create -f llm.yml
conda activate llm
```
Using versions of `torch` and `transformers` lower than the recommended ones is not advised.
### Usage Examples
#### Web Demo
**Gradio**
```bash
python web_demo_gradio.py
```
#### Command-Line Demo
You can run `cli_demo.py` in the repository to launch a simple command-line demo:
```bash
python cli_demo.py
```
## Acknowledgements
- Shanghai AI Laboratory
- Shanghai Jiao Tong University - Qing Yuan Research Institute
- East China University of Science and Technology - Lab of Natural Language Processing and Big Data Mining
## License
The code in this project is released under the [Apache 2.0](https://github.com/openmedlab/PULSE/blob/main/LICENSE) license, and the model weights under the [GNU AGPL 3.0](https://github.com/openmedlab/PULSE/blob/main/MODEL_LICENSE) license. If services built on the models in this project, or modified versions of them, produce misleading or harmful statements and cause adverse effects, the responsibility lies with the service provider and not with this project.
|
MNgaix/Gemma-7b-bnb-4bit_lora_model | MNgaix | 2025-04-03T06:01:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T06:01:17Z | ---
base_model: unsloth/gemma-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MNgaix
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
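A hedged loading sketch follows; it assumes the repo contains weights loadable through Unsloth's `FastLanguageModel` (as the training setup suggests) and that 4-bit loading is acceptable for inference. If the repo holds only LoRA adapters, attach them to the base model instead.
```python
# Hedged sketch: load with Unsloth for inference; adjust max_seq_length as needed.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MNgaix/Gemma-7b-bnb-4bit_lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model to Unsloth's faster inference path
```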
|
HailJebus/Gaslit-Abomination-24B-v1.0-Q4_K_M-GGUF | HailJebus | 2025-04-03T05:56:22Z | 0 | 0 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ReadyArt/Gaslit-Abomination-24B-v1.0",
"base_model:merge:ReadyArt/Gaslit-Abomination-24B-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-03T05:46:03Z | ---
base_model: ReadyArt/Gaslit-Abomination-24B-v1.0
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
- llama-cpp
- gguf-my-repo
base_model_relation: merge
---
# HailJebus/Gaslit-Abomination-24B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`ReadyArt/Gaslit-Abomination-24B-v1.0`](https://huggingface.co/ReadyArt/Gaslit-Abomination-24B-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/Gaslit-Abomination-24B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo HailJebus/Gaslit-Abomination-24B-v1.0-Q4_K_M-GGUF --hf-file gaslit-abomination-24b-v1.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo HailJebus/Gaslit-Abomination-24B-v1.0-Q4_K_M-GGUF --hf-file gaslit-abomination-24b-v1.0-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo HailJebus/Gaslit-Abomination-24B-v1.0-Q4_K_M-GGUF --hf-file gaslit-abomination-24b-v1.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo HailJebus/Gaslit-Abomination-24B-v1.0-Q4_K_M-GGUF --hf-file gaslit-abomination-24b-v1.0-q4_k_m.gguf -c 2048
```
|
RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf | RichardErkhov | 2025-04-03T05:53:38Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T05:14:17Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi35_kp_dpo7epoch_total - GGUF
- Model creator: https://huggingface.co/ihughes15234/
- Original model: https://huggingface.co/ihughes15234/phi35_kp_dpo7epoch_total/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi35_kp_dpo7epoch_total.Q2_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q2_K.gguf) | Q2_K | 1.35GB |
| [phi35_kp_dpo7epoch_total.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [phi35_kp_dpo7epoch_total.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [phi35_kp_dpo7epoch_total.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [phi35_kp_dpo7epoch_total.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [phi35_kp_dpo7epoch_total.Q3_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q3_K.gguf) | Q3_K | 1.75GB |
| [phi35_kp_dpo7epoch_total.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [phi35_kp_dpo7epoch_total.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [phi35_kp_dpo7epoch_total.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [phi35_kp_dpo7epoch_total.Q4_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q4_0.gguf) | Q4_0 | 2.03GB |
| [phi35_kp_dpo7epoch_total.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [phi35_kp_dpo7epoch_total.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [phi35_kp_dpo7epoch_total.Q4_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q4_K.gguf) | Q4_K | 2.16GB |
| [phi35_kp_dpo7epoch_total.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [phi35_kp_dpo7epoch_total.Q4_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q4_1.gguf) | Q4_1 | 2.24GB |
| [phi35_kp_dpo7epoch_total.Q5_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q5_0.gguf) | Q5_0 | 2.46GB |
| [phi35_kp_dpo7epoch_total.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [phi35_kp_dpo7epoch_total.Q5_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q5_K.gguf) | Q5_K | 2.53GB |
| [phi35_kp_dpo7epoch_total.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [phi35_kp_dpo7epoch_total.Q5_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q5_1.gguf) | Q5_1 | 2.68GB |
| [phi35_kp_dpo7epoch_total.Q6_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q6_K.gguf) | Q6_K | 2.92GB |
| [phi35_kp_dpo7epoch_total.Q8_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo7epoch_total-gguf/blob/main/phi35_kp_dpo7epoch_total.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
base_model: ihughes15234/phi35_kp_dpo5epoch_total
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model :** ihughes15234/phi35_kp_dpo5epoch_total
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xw17/Qwen2-1.5B-Instruct_finetuned_3_def_lora2 | xw17 | 2025-04-03T05:51:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T05:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yzk/trocr-large-printed-vedic | yzk | 2025-04-03T05:51:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"sa",
"dataset:yzk/veda-ocr-ms",
"arxiv:1910.09700",
"base_model:microsoft/trocr-large-printed",
"base_model:finetune:microsoft/trocr-large-printed",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-03T05:29:12Z | ---
library_name: transformers
datasets:
- yzk/veda-ocr-ms
language:
- sa
metrics:
- cer
- chrf
base_model:
- microsoft/trocr-large-printed
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
OCR for Vedic texts printed in Devanagari.
**Note**
This version is limited to texts whose accents are marked by vertical lines over Devanagari characters.
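Although the quick-start section of this card is commented out, TrOCR checkpoints follow a standard inference pattern, so here is a hedged sketch; it assumes the processor files are present in this repo, and the input file name `line.png` is hypothetical (the image should be a single cropped text line).
```python
# Hedged sketch: standard TrOCR vision-encoder-decoder inference.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "yzk/trocr-large-printed-vedic"
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("line.png").convert("RGB")  # hypothetical line image of printed Devanagari
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=512)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```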
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** https://huggingface.co/yzk
- **Funded by:** https://kaken.nii.ac.jp/en/grant/KAKENHI-PROJECT-23K18646/
<!-- - **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed] -->
<!-- ### Model Sources [optional]
<!-- Provide the basic links for the model. -->
<!-- - **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed] -->
<!-- ## Uses -->
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- ### Direct Use -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- [More Information Needed] -->
<!-- ### Downstream Use [optional] -->
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- [More Information Needed] -->
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- [More Information Needed] -->
<!-- ## Bias, Risks, and Limitations -->
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- [More Information Needed] -->
<!-- ### Recommendations -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
<!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. -->
<!-- ## How to Get Started with the Model -->
<!-- Use the code below to get started with the model. -->
<!-- [More Information Needed] -->
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
<!-- [More Information Needed] -->
Schroeder's edition of Maitrāyaṇī Sam̐hitā: https://huggingface.co/datasets/yzk/veda-ocr-ms (will be public)
<!-- ### Training Procedure -->
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
<!-- #### Preprocessing [optional] -->
<!-- [More Information Needed] -->
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
```yaml
params:
max_length: 512
train_batch_size: 16
eval_batch_size: 16
learning_rate: 2e-5
weight_decay: 0.01
save_total_limit: 3
num_train_epochs: 20
logging_steps: 2
save_steps: 2000
eval_steps: 200
```
<!-- #### Speeds, Sizes, Times [optional] -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed] -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
<!-- ## Model Examination [optional] -->
<!-- Relevant interpretability work for the model goes here -->
<!-- [More Information Needed] -->
<!-- ## Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->
<!-- - **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed] -->
<!-- ## Technical Specifications [optional] -->
<!-- ### Model Architecture and Objective -->
<!-- [More Information Needed] -->
<!-- ### Compute Infrastructure -->
<!-- [More Information Needed] -->
<!-- #### Hardware -->
<!-- [More Information Needed] -->
<!-- #### Software -->
<!-- [More Information Needed] -->
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
<!-- ## Glossary [optional] -->
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
<!-- [More Information Needed] -->
<!-- ## More Information [optional] -->
<!-- [More Information Needed] -->
<!-- ## Model Card Authors [optional] -->
<!-- [More Information Needed] -->
<!-- ## Model Card Contact -->
<!-- [More Information Needed] --> |
minahil-malik-original/minahil-malik-new-Full.original.minahil.malik.viral.video.official | minahil-malik-original | 2025-04-03T05:50:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-03T05:45:14Z | |
Sara5115/swin-tiny-patch4-window7-224-SBlurClassification | Sara5115 | 2025-04-03T05:49:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-04-03T05:47:22Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-SBlurClassification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-SBlurClassification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.4673 | 0.9492 |
| No log | 2.0 | 4 | 0.2798 | 0.9831 |
| No log | 3.0 | 6 | 0.2083 | 1.0 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cpu
- Datasets 3.5.0
- Tokenizers 0.21.1
|
AmaanDhamaskar/sarvam1-mr-summarizer | AmaanDhamaskar | 2025-04-03T05:48:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:sarvamai/sarvam-1",
"base_model:finetune:sarvamai/sarvam-1",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T05:48:30Z | ---
base_model: sarvamai/sarvam-1
library_name: transformers
model_name: sarvam1-mr-summarizer
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sarvam1-mr-summarizer
This model is a fine-tuned version of [sarvamai/sarvam-1](https://huggingface.co/sarvamai/sarvam-1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmaanDhamaskar/sarvam1-mr-summarizer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
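
For reference, a minimal SFT sketch with TRL (illustrative only: the actual Marathi summarization dataset is not documented here, so `trl-lib/Capybara` stands in as a placeholder):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the real training data for this model is undocumented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="sarvamai/sarvam-1",  # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="sarvam1-mr-summarizer"),
)
trainer.train()
```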
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
xw17/Qwen2-1.5B-Instruct_finetuned_2_def_lora2 | xw17 | 2025-04-03T05:47:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T05:47:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Haricot24601/rl_course_vizdoom_health_gathering_supreme | Haricot24601 | 2025-04-03T05:46:32Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T05:46:10Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.87 +/- 0.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Haricot24601/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to set `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
CarolTa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_trotting_worm | CarolTa | 2025-04-03T05:45:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bellowing trotting worm",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T11:42:11Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_trotting_worm
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bellowing trotting worm
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_trotting_worm
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CarolTa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_trotting_worm", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
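
For reference, a minimal GRPO sketch with TRL (illustrative only: the reward function and dataset below are placeholders, not the swarm setup used for this model):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

# Toy reward: prefer completions close to 20 characters.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",  # base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```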
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/MN-Hekate-Anassa-17B-i1-GGUF | mradermacher | 2025-04-03T05:40:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/MN-Hekate-Anassa-17B",
"base_model:quantized:mergekit-community/MN-Hekate-Anassa-17B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-02T07:03:10Z | ---
base_model: mergekit-community/MN-Hekate-Anassa-17B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/MN-Hekate-Anassa-17B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
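
As a concrete starting point, a hedged sketch with `llama-cpp-python` (assumptions: the package is installed, and the Q4_K_M file from the table below is the one you want):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo (filename from the table below).
path = hf_hub_download(
    repo_id="mradermacher/MN-Hekate-Anassa-17B-i1-GGUF",
    filename="MN-Hekate-Anassa-17B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```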
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ2_M.gguf) | i1-IQ2_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 6.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q2_K.gguf) | i1-Q2_K | 6.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 7.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ3_S.gguf) | i1-IQ3_S | 7.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 8.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q4_0.gguf) | i1-Q4_0 | 9.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 9.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 9.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 10.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q4_1.gguf) | i1-Q4_1 | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Hekate-Anassa-17B-i1-GGUF/resolve/main/MN-Hekate-Anassa-17B.i1-Q6_K.gguf) | i1-Q6_K | 13.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MinWook1125/Opthimus_MCQA_EQA_CR_24500 | MinWook1125 | 2025-04-03T05:33:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T05:30:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
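
Until the authors fill this in, a minimal sketch assuming the standard `transformers` causal-LM API (the prompt format below is a guess, not documented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MinWook1125/Opthimus_MCQA_EQA_CR_24500"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Hypothetical prompt; the expected format is not documented in this card.
inputs = tokenizer("Question: What is 2 + 2?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```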
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xw17/SmolLM-1.7B-Instruct_finetuned_4_def_lora2 | xw17 | 2025-04-03T05:32:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T05:32:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinWook1125/Opthimus_MCQA_EQA_CR_22500 | MinWook1125 | 2025-04-03T05:30:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T05:26:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |