| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-01 00:49:44) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (461 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-01 00:49:44) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
mradermacher/UltraIF-8B-SFT-i1-GGUF | mradermacher | 2025-04-03T22:13:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:bambisheng/UltraIF-8B-SFT",
"base_model:quantized:bambisheng/UltraIF-8B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-03T20:41:35Z | ---
base_model: bambisheng/UltraIF-8B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bambisheng/UltraIF-8B-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
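Multi-part files of the older `partXofY` style are split by plain byte concatenation, so joining them amounts to `cat model.gguf.part* > model.gguf`. As an illustration (not part of the original card), a Python equivalent might look like the sketch below; note that files produced by llama.cpp's `gguf-split` tool must instead be merged with `llama-gguf-split --merge`, not concatenated.

```python
import glob
import shutil

def concat_gguf_parts(pattern, out_path):
    """Concatenate split GGUF part files (e.g. *.gguf.part1of2,
    *.gguf.part2of2) into one file, in lexical order.

    Equivalent to `cat model.gguf.part* > model.gguf`. Only valid
    for plain byte-split parts, not gguf-split output."""
    parts = sorted(glob.glob(pattern))
    if not parts:
        raise FileNotFoundError(f"no files match {pattern}")
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
    return out_path
```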
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF/resolve/main/UltraIF-8B-SFT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
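As a back-of-envelope check (an illustration, not from the card), the file sizes above map to approximate bits per weight for this 8B-parameter model. The estimate assumes decimal gigabytes and ignores metadata and the fact that some tensors are kept at higher precision:

```python
def approx_bits_per_weight(file_size_gb, n_params_b):
    """Rough bits per weight of a quantized model file.

    file_size_gb: file size in decimal GB; n_params_b: parameter
    count in billions. Metadata overhead and mixed-precision
    tensors are ignored, so this is only an estimate."""
    return file_size_gb * 8 / n_params_b

# e.g. the i1-Q4_K_M file above: ~5.0 GB for ~8B parameters
print(round(approx_bits_per_weight(5.0, 8.0), 1))  # 5.0
```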
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf | RichardErkhov | 2025-04-03T22:05:20Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:29:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-it-Ecommerce-ChatBot - GGUF
- Model creator: https://huggingface.co/DsnTgr/
- Original model: https://huggingface.co/DsnTgr/llama-3.2-3b-it-Ecommerce-ChatBot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB |
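A common rule of thumb when choosing from a table like the one above (an illustration, not from the card): pick the largest quant whose file fits your RAM or VRAM with some headroom left for the KV cache and runtime buffers. A hypothetical helper, using a few of the sizes listed above:

```python
def pick_quant(quants, budget_gb, overhead_gb=1.0):
    """Pick the largest quant whose file fits within a memory budget.

    quants: mapping of quant name -> file size in GB.
    overhead_gb: headroom reserved for KV cache and runtime buffers
    (a rough assumption, not a measured value).
    Returns None if nothing fits."""
    usable = budget_gb - overhead_gb
    fitting = {name: size for name, size in quants.items() if size <= usable}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

# sizes (GB) taken from the table above
sizes = {"Q2_K": 1.27, "Q4_K_M": 1.88, "Q6_K": 2.46, "Q8_0": 3.19}
print(pick_quant(sizes, budget_gb=4.0))  # Q6_K (Q8_0 exceeds the 3.0 GB usable)
```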
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso16/9385b35c-d904-481b-9d47-f27918b70c58 | lesso16 | 2025-04-03T22:04:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-03T20:17:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9385b35c-d904-481b-9d47-f27918b70c58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 485ebfdcc4a33b1d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/485ebfdcc4a33b1d_train_data.json
type:
field_input: plan
field_instruction: goal
field_output: critique
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso16/9385b35c-d904-481b-9d47-f27918b70c58
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000216
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/485ebfdcc4a33b1d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 160
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9cd3bbf-6c5a-4d04-9eb0-f2de16aa65e5
wandb_project: 16a
wandb_run: your_name
wandb_runid: b9cd3bbf-6c5a-4d04-9eb0-f2de16aa65e5
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
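A quick consistency check (illustrative, not part of the card): the effective batch size follows from the config's `micro_batch_size` and `gradient_accumulation_steps`, assuming single-GPU training since no world size appears in the config:

```python
# Values from the axolotl config above
micro_batch_size = 4
gradient_accumulation_steps = 8
world_size = 1  # assumption: single GPU; not stated in the config

effective_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
print(effective_batch_size)  # 32, matching the reported total_train_batch_size
```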
# 9385b35c-d904-481b-9d47-f27918b70c58
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000216
- train_batch_size: 4
- eval_batch_size: 4
- seed: 160
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
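As a sketch of what that schedule implies (assuming the standard linear-warmup-then-cosine-decay implementation used by Hugging Face Transformers; the exact curve depends on the trainer), the learning rate at a given step can be computed as:

```python
import math

def lr_at_step(step, peak_lr=0.000216, warmup_steps=100, total_steps=500):
    """Learning rate under linear warmup to peak_lr followed by
    cosine decay to 0 at total_steps, matching the config's
    lr_scheduler: cosine with warmup_steps: 100."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(50))   # halfway through warmup: 0.000108
print(lr_at_step(100))  # peak learning rate: 0.000216
print(lr_at_step(300))  # halfway through cosine decay: 0.000108
```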
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 1.8707 |
| 5.9485 | 0.2797 | 500 | 0.7427 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf | RichardErkhov | 2025-04-03T22:04:35Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:28:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-medical-chatbot - GGUF
- Model creator: https://huggingface.co/javedafroz/
- Original model: https://huggingface.co/javedafroz/llama-3.2-3b-medical-chatbot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-medical-chatbot.Q2_K.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-medical-chatbot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-medical-chatbot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-medical-chatbot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-medical-chatbot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-medical-chatbot.Q3_K.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-medical-chatbot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-medical-chatbot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-medical-chatbot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-medical-chatbot.Q4_0.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-medical-chatbot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-medical-chatbot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-medical-chatbot.Q4_K.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-medical-chatbot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-medical-chatbot.Q4_1.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-medical-chatbot.Q5_0.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-medical-chatbot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-medical-chatbot.Q5_K.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-medical-chatbot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-medical-chatbot.Q5_1.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-medical-chatbot.Q6_K.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-medical-chatbot.Q8_0.gguf](https://huggingface.co/RichardErkhov/javedafroz_-_llama-3.2-3b-medical-chatbot-gguf/blob/main/llama-3.2-3b-medical-chatbot.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kei5uke/phi4_10_epoch | Kei5uke | 2025-04-03T22:03:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-4-bnb-4bit",
"base_model:quantized:unsloth/phi-4-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:56:46Z | ---
base_model: unsloth/phi-4-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kei5uke
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
genki10/BERT_AugV8_k7_task1_organization_sp040_lw010_fold0 | genki10 | 2025-04-03T22:02:42Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-26T09:03:22Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k7_task1_organization_sp040_lw010_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k7_task1_organization_sp040_lw010_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7098
- Qwk: 0.4204
- Mse: 0.7098
- Rmse: 0.8425
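As a quick sanity check (a sketch, not part of the original card), the reported RMSE is simply the square root of the reported MSE:

```python
import math

mse = 0.7098           # evaluation MSE reported above
rmse = math.sqrt(mse)  # RMSE is the square root of MSE
print(round(rmse, 4))
```

The rounded result matches the reported RMSE of 0.8425.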
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 5.9535 | 0.0267 | 5.9535 | 2.4400 |
| No log | 2.0 | 10 | 4.3763 | 0.0077 | 4.3763 | 2.0920 |
| No log | 3.0 | 15 | 2.9621 | 0.0039 | 2.9621 | 1.7211 |
| No log | 4.0 | 20 | 1.6320 | 0.0316 | 1.6320 | 1.2775 |
| No log | 5.0 | 25 | 0.9710 | 0.0316 | 0.9710 | 0.9854 |
| No log | 6.0 | 30 | 1.1950 | 0.0316 | 1.1950 | 1.0932 |
| No log | 7.0 | 35 | 1.0846 | 0.0316 | 1.0846 | 1.0414 |
| No log | 8.0 | 40 | 1.0092 | 0.0316 | 1.0092 | 1.0046 |
| No log | 9.0 | 45 | 0.8222 | 0.2567 | 0.8222 | 0.9068 |
| No log | 10.0 | 50 | 0.7710 | 0.3465 | 0.7710 | 0.8780 |
| No log | 11.0 | 55 | 0.6797 | 0.3519 | 0.6797 | 0.8244 |
| No log | 12.0 | 60 | 0.5826 | 0.3649 | 0.5826 | 0.7633 |
| No log | 13.0 | 65 | 0.8061 | 0.3316 | 0.8061 | 0.8978 |
| No log | 14.0 | 70 | 0.6592 | 0.4715 | 0.6592 | 0.8119 |
| No log | 15.0 | 75 | 0.6216 | 0.4666 | 0.6216 | 0.7884 |
| No log | 16.0 | 80 | 0.8361 | 0.3158 | 0.8361 | 0.9144 |
| No log | 17.0 | 85 | 0.8522 | 0.3291 | 0.8522 | 0.9232 |
| No log | 18.0 | 90 | 0.8275 | 0.3917 | 0.8275 | 0.9097 |
| No log | 19.0 | 95 | 0.7590 | 0.4177 | 0.7590 | 0.8712 |
| No log | 20.0 | 100 | 0.7737 | 0.3979 | 0.7737 | 0.8796 |
| No log | 21.0 | 105 | 0.7258 | 0.4437 | 0.7258 | 0.8519 |
| No log | 22.0 | 110 | 0.7845 | 0.3628 | 0.7845 | 0.8857 |
| No log | 23.0 | 115 | 0.7581 | 0.4280 | 0.7581 | 0.8707 |
| No log | 24.0 | 120 | 0.6973 | 0.4589 | 0.6973 | 0.8350 |
| No log | 25.0 | 125 | 0.7747 | 0.4524 | 0.7747 | 0.8802 |
| No log | 26.0 | 130 | 0.9316 | 0.2710 | 0.9316 | 0.9652 |
| No log | 27.0 | 135 | 0.8909 | 0.2475 | 0.8909 | 0.9439 |
| No log | 28.0 | 140 | 0.7512 | 0.4449 | 0.7512 | 0.8667 |
| No log | 29.0 | 145 | 0.7098 | 0.4204 | 0.7098 | 0.8425 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T21:59:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"base_model:shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:06:28Z | ---
base_model: shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-roleplaying-sft
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
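Multi-part GGUF files are joined with a plain `cat`. Here is a minimal sketch using dummy part files (the filenames are illustrative only — the real part names vary by repo, so check the repository's file list):

```shell
# Create two dummy parts standing in for a split GGUF download.
printf 'first-half'  > model.gguf.part1of2
printf 'second-half' > model.gguf.part2of2

# Concatenate the parts, in order, into a single usable file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

# The joined file contains both halves back to back.
cat model.gguf
```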
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
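As a rough consistency check on the table above (assuming — this is not stated in the card — that a Llama 3.1 8B model has about 8.0 billion parameters), the bits-per-weight of a quant can be estimated from its file size:

```python
params = 8.0e9   # assumed parameter count for a Llama 3.1 8B model
size_gb = 16.2   # f16 file size from the table above

# bits per weight = total bits in the file / number of parameters
bpw = size_gb * 1e9 * 8 / params
print(round(bpw, 1))
```

The estimate comes out near 16 bits per weight, consistent with the "16 bpw" note on the f16 row.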
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bowilleatyou/69da0c3e-88c4-40d5-aea4-5fca40eeb9e9 | bowilleatyou | 2025-04-03T21:58:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T20:31:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stulcrad/Robeczech-CERED3 | stulcrad | 2025-04-03T21:58:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"dataset:generator",
"base_model:ufal/robeczech-base",
"base_model:finetune:ufal/robeczech-base",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T17:05:03Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: ufal/robeczech-base
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
model-index:
- name: Robeczech-CERED3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Robeczech-CERED3
This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8733
- Accuracy: 0.8156
- Micro Precision: 0.8156
- Micro Recall: 0.8156
- Micro F1: 0.8156
- Macro Precision: 0.8096
- Macro Recall: 0.7827
- Macro F1: 0.7879
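The identical accuracy and micro precision/recall/F1 values above are expected: for single-label multiclass classification, micro-averaged precision, recall, and F1 all reduce to plain accuracy. A small sketch illustrating this (toy labels, not the model's actual predictions):

```python
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 2, 2, 2, 1, 0, 1]

# Single-label multiclass: every prediction is a TP for one class or an
# FP for one class (and an FN for the true class), so the micro-averaged
# denominators TP+FP and TP+FN both equal the number of samples.
tp = sum(t == p for t, p in zip(y_true, y_pred))
n = len(y_true)

accuracy = tp / n
micro_precision = tp / n  # total TP / total predictions
micro_recall = tp / n     # total TP / total true labels
micro_f1 = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)

print(accuracy, micro_precision, micro_recall, micro_f1)
```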
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
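The total train batch size listed above is derived, not independent: it is the per-device batch size multiplied by the gradient-accumulation steps, as this one-line check shows:

```python
train_batch_size = 12            # per-device batch size from the list above
gradient_accumulation_steps = 2  # gradients accumulated before each step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)
```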
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro Precision | Micro Recall | Micro F1 | Macro Precision | Macro Recall | Macro F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|
| 0.8548 | 1.0 | 6344 | 0.7795 | 0.7684 | 0.7684 | 0.7684 | 0.7684 | 0.7083 | 0.7039 | 0.6813 |
| 0.6956 | 2.0 | 12688 | 0.7118 | 0.7882 | 0.7882 | 0.7882 | 0.7882 | 0.7844 | 0.7073 | 0.7186 |
| 0.5848 | 3.0 | 19032 | 0.7658 | 0.7879 | 0.7879 | 0.7879 | 0.7879 | 0.7756 | 0.7174 | 0.7244 |
| 0.4779 | 4.0 | 25376 | 0.7557 | 0.7916 | 0.7916 | 0.7916 | 0.7916 | 0.7662 | 0.7399 | 0.7397 |
| 0.3839 | 5.0 | 31720 | 0.8042 | 0.7981 | 0.7981 | 0.7981 | 0.7981 | 0.7799 | 0.7537 | 0.7550 |
| 0.3076 | 6.0 | 38064 | 0.8763 | 0.8035 | 0.8035 | 0.8035 | 0.8035 | 0.7851 | 0.7342 | 0.7398 |
| 0.2303 | 7.0 | 44408 | 0.8900 | 0.8107 | 0.8107 | 0.8107 | 0.8107 | 0.7854 | 0.7643 | 0.7666 |
| 0.1908 | 8.0 | 50752 | 1.0634 | 0.7960 | 0.7960 | 0.7960 | 0.7960 | 0.7443 | 0.7331 | 0.7233 |
| 0.1362 | 9.0 | 57096 | 1.1388 | 0.8025 | 0.8025 | 0.8025 | 0.8025 | 0.8033 | 0.7438 | 0.7603 |
| 0.1118 | 10.0 | 63440 | 1.3610 | 0.8117 | 0.8117 | 0.8117 | 0.8117 | 0.7791 | 0.7719 | 0.7646 |
| 0.0795 | 11.0 | 69784 | 1.4937 | 0.8093 | 0.8093 | 0.8093 | 0.8093 | 0.7576 | 0.7654 | 0.7514 |
| 0.051 | 12.0 | 76128 | 1.6344 | 0.8148 | 0.8148 | 0.8148 | 0.8148 | 0.7902 | 0.7635 | 0.7652 |
| 0.0283 | 13.0 | 82472 | 1.7594 | 0.8111 | 0.8111 | 0.8111 | 0.8111 | 0.7914 | 0.7677 | 0.7685 |
| 0.0151 | 14.0 | 88816 | 1.8266 | 0.8158 | 0.8158 | 0.8158 | 0.8158 | 0.7844 | 0.7702 | 0.7641 |
| 0.011 | 15.0 | 95160 | 1.8417 | 0.8134 | 0.8134 | 0.8134 | 0.8134 | 0.7884 | 0.7726 | 0.7691 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ozziek/unsloth-llama-8b-16bit_v5-sandy-x2ejmv8m | ozziek | 2025-04-03T21:57:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T21:54:08Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ozziek
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/UltraIF-8B-UltraComposer-GGUF | mradermacher | 2025-04-03T21:55:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:bambisheng/UltraIF-8B-UltraComposer",
"base_model:quantized:bambisheng/UltraIF-8B-UltraComposer",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:02:14Z | ---
base_model: bambisheng/UltraIF-8B-UltraComposer
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bambisheng/UltraIF-8B-UltraComposer
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
HanningZhang/Distill_Qwen_1.5b_scalebio_ours | HanningZhang | 2025-04-03T21:51:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T21:49:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MrRobotoAI/A5.5-Q4_K_M-GGUF | MrRobotoAI | 2025-04-03T21:51:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/A5.5",
"base_model:quantized:MrRobotoAI/A5.5",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:51:03Z | ---
base_model: MrRobotoAI/A5.5
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/A5.5-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/A5.5`](https://huggingface.co/MrRobotoAI/A5.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/A5.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -c 2048
```
|
allin1app/hlb | allin1app | 2025-04-03T21:49:32Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-03T16:28:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: hayley
---
# Hlb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `hayley` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "hayley",
"lora_weights": "https://huggingface.co/allin1app/hlb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('allin1app/hlb', weight_name='lora.safetensors')
image = pipeline('hayley').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2534
- Learning rate: 0.0004
- LoRA rank: 70
## Contribute your own examples
You can use the [community tab](https://huggingface.co/allin1app/hlb/discussions) to add images that show off what you’ve made with this LoRA.
|
FIERRO01/MILEI | FIERRO01 | 2025-04-03T21:48:01Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-03T21:19:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
elmurod1202/bertbek-news-classifier | elmurod1202 | 2025-04-03T21:45:34Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"uz",
"dataset:elmurod1202/daryo_news_categorized",
"base_model:elmurod1202/bertbek-news-big-cased",
"base_model:finetune:elmurod1202/bertbek-news-big-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-03T11:30:28Z | ---
library_name: transformers
license: mit
base_model: elmurod1202/bertbek-news-big-cased
tags:
- generated_from_trainer
model-index:
- name: bertbek-news-classifier
results: []
datasets:
- elmurod1202/daryo_news_categorized
language:
- uz
metrics:
- accuracy
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertbek-news-classifier
This model is a fine-tuned version of [elmurod1202/bertbek-news-big-cased](https://huggingface.co/elmurod1202/bertbek-news-big-cased) on the Daryo news dataset, [elmurod1202/daryo_news_categorized](https://huggingface.co/datasets/elmurod1202/daryo_news_categorized).
It achieves the following results on the evaluation set:
- Loss: 0.2955
## Model description
A BERTbek model fine-tuned for news text classification.
## Intended uses & limitations
A text classification model for Uzbek-language texts.
## Training and evaluation data
Daryo news dataset: [elmurod1202/daryo_news_categorized](https://huggingface.co/datasets/elmurod1202/daryo_news_categorized)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.22 | 1.0 | 3378 | 0.1993 |
| 0.1194 | 2.0 | 6756 | 0.2308 |
| 0.0633 | 3.0 | 10134 | 0.2955 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
MrRobotoAI/A3.5-Q4_K_M-GGUF | MrRobotoAI | 2025-04-03T21:45:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/A3.5",
"base_model:quantized:MrRobotoAI/A3.5",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:44:37Z | ---
base_model: MrRobotoAI/A3.5
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/A3.5-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/A3.5`](https://huggingface.co/MrRobotoAI/A3.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/A3.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -c 2048
```
|
genki10/BERT_AugV8_k3_task1_organization_sp040_lw010_fold1 | genki10 | 2025-04-03T21:43:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T08:44:39Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp040_lw010_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp040_lw010_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7930
- Qwk: 0.4061
- Mse: 0.7922
- Rmse: 0.8901
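The Qwk figure above is Cohen's quadratic weighted kappa on the ordinal scores; a minimal NumPy implementation of the metric (a sketch, not the exact evaluation script used here) looks like:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights, for ordinal labels 0..n_classes-1."""
    observed = np.zeros((n_classes, n_classes))   # confusion matrix
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    idx = np.arange(n_classes)
    # Quadratic penalty: disagreements far apart on the scale cost more.
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected confusion matrix under chance agreement (outer product of marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], n_classes=4))  # 1.0
```

A value of 1.0 means perfect agreement, 0.0 means chance-level agreement, so the 0.4061 above indicates moderate agreement with the gold scores.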
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 3 | 9.2135 | 0.0 | 9.2109 | 3.0349 |
| No log | 2.0 | 6 | 5.8601 | 0.0244 | 5.8581 | 2.4203 |
| No log | 3.0 | 9 | 3.7444 | 0.0 | 3.7429 | 1.9347 |
| No log | 4.0 | 12 | 2.6206 | 0.0 | 2.6188 | 1.6183 |
| No log | 5.0 | 15 | 1.9159 | 0.0592 | 1.9142 | 1.3836 |
| No log | 6.0 | 18 | 1.3726 | 0.0 | 1.3711 | 1.1709 |
| No log | 7.0 | 21 | 1.2267 | 0.0 | 1.2253 | 1.1069 |
| No log | 8.0 | 24 | 0.9310 | 0.1245 | 0.9298 | 0.9643 |
| No log | 9.0 | 27 | 0.9639 | 0.0735 | 0.9627 | 0.9812 |
| No log | 10.0 | 30 | 1.6947 | -0.0128 | 1.6930 | 1.3011 |
| No log | 11.0 | 33 | 0.7900 | 0.4308 | 0.7889 | 0.8882 |
| No log | 12.0 | 36 | 0.7903 | 0.4046 | 0.7893 | 0.8884 |
| No log | 13.0 | 39 | 0.7842 | 0.3885 | 0.7830 | 0.8849 |
| No log | 14.0 | 42 | 1.4622 | -0.0846 | 1.4606 | 1.2085 |
| No log | 15.0 | 45 | 1.1450 | -0.0143 | 1.1438 | 1.0695 |
| No log | 16.0 | 48 | 0.8121 | 0.1879 | 0.8111 | 0.9006 |
| No log | 17.0 | 51 | 0.9630 | 0.2580 | 0.9620 | 0.9808 |
| No log | 18.0 | 54 | 0.8843 | 0.3139 | 0.8835 | 0.9399 |
| No log | 19.0 | 57 | 0.8518 | 0.2628 | 0.8510 | 0.9225 |
| No log | 20.0 | 60 | 1.2451 | 0.0760 | 1.2440 | 1.1154 |
| No log | 21.0 | 63 | 0.8308 | 0.3617 | 0.8301 | 0.9111 |
| No log | 22.0 | 66 | 0.8677 | 0.3915 | 0.8671 | 0.9312 |
| No log | 23.0 | 69 | 1.1814 | 0.2157 | 1.1804 | 1.0865 |
| No log | 24.0 | 72 | 0.9775 | 0.3297 | 0.9768 | 0.9883 |
| No log | 25.0 | 75 | 0.9327 | 0.3444 | 0.9319 | 0.9653 |
| No log | 26.0 | 78 | 0.7930 | 0.4061 | 0.7922 | 0.8901 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
MrRobotoAI/A2.5-Q4_K_M-GGUF | MrRobotoAI | 2025-04-03T21:41:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/A2.5",
"base_model:quantized:MrRobotoAI/A2.5",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T21:41:28Z | ---
base_model: MrRobotoAI/A2.5
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/A2.5-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/A2.5`](https://huggingface.co/MrRobotoAI/A2.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/A2.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -c 2048
```
|
lesso15/703cf9af-741e-4beb-902b-ffd0d9f4abb3 | lesso15 | 2025-04-03T21:38:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/03095c79-dc92-4086-9b23-22c749dc4958",
"base_model:adapter:samoline/03095c79-dc92-4086-9b23-22c749dc4958",
"region:us"
] | null | 2025-04-03T20:28:06Z | ---
library_name: peft
base_model: samoline/03095c79-dc92-4086-9b23-22c749dc4958
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 703cf9af-741e-4beb-902b-ffd0d9f4abb3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/03095c79-dc92-4086-9b23-22c749dc4958
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7a797db7872e4ed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7a797db7872e4ed_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso15/703cf9af-741e-4beb-902b-ffd0d9f4abb3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/e7a797db7872e4ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 150
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 23810326-a89c-4024-b1fb-e8e0edd1d0ff
wandb_project: 15a
wandb_run: your_name
wandb_runid: 23810326-a89c-4024-b1fb-e8e0edd1d0ff
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 703cf9af-741e-4beb-902b-ffd0d9f4abb3
This model is a fine-tuned version of [samoline/03095c79-dc92-4086-9b23-22c749dc4958](https://huggingface.co/samoline/03095c79-dc92-4086-9b23-22c749dc4958) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
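The total train batch size above follows from the micro-batch size times the gradient accumulation steps (times the number of devices, one here); a quick sanity check with a hypothetical helper mirroring how the Trainer derives it:

```python
# Hypothetical helper; not part of the training code itself.
def effective_batch_size(micro_batch, grad_accum_steps, n_devices=1):
    """Effective batch size seen by each optimizer step."""
    return micro_batch * grad_accum_steps * n_devices

total = effective_batch_size(micro_batch=4, grad_accum_steps=8)
print(total)  # 32, matching total_train_batch_size above
```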
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.7012 |
| 0.7713 | 0.1144 | 500 | 0.7167 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MinaMila/phi3_Adult_5ep_22 | MinaMila | 2025-04-03T21:36:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-28T04:52:16Z | ---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hosseinka/qwen2-vl-run_lr5e-5_lora_r8lora_alpha16 | Hosseinka | 2025-04-03T21:34:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T16:20:53Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-vl-run_lr5e-5_lora_r8lora_alpha16
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-vl-run_lr5e-5_lora_r8lora_alpha16
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Hosseinka/qwen2-vl-run_lr5e-5_lora_r8lora_alpha16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hosseinksh/qwen2-vl-run_lr5e-5_lora_r8lora_alpha16/runs/78j18kp3)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.4.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Dronvil/Mistral_Nemo_Information_security_ru | Dronvil | 2025-04-03T21:33:05Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T21:19:12Z | ---
base_model: unsloth/mistral-nemo-base-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
- ru
---
# Uploaded model
- **Developed by:** Dronvil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-nemo-base-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF | sliu72 | 2025-04-03T21:27:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-03T21:26:34Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -c 2048
```
|
TabAnd58/bert-synthetic | TabAnd58 | 2025-04-03T21:26:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-03T21:04:38Z | ---
library_name: transformers
license: mit
base_model: BAAI/bge-small-en-v1.5
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-synthetic
This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1121
- Precision: 0.9185
- Recall: 0.9318
- F1: 0.9251
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.373713206635396e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1171 | 1.0 | 2503 | 0.0982 | 0.8653 | 0.9026 | 0.8835 | 0.9756 |
| 0.0727 | 2.0 | 5006 | 0.0878 | 0.8998 | 0.9278 | 0.9136 | 0.9806 |
| 0.049 | 3.0 | 7509 | 0.0852 | 0.9021 | 0.9212 | 0.9116 | 0.9814 |
| 0.032 | 4.0 | 10012 | 0.0917 | 0.8980 | 0.9286 | 0.9130 | 0.9814 |
| 0.0213 | 5.0 | 12515 | 0.0960 | 0.9107 | 0.9290 | 0.9198 | 0.9814 |
| 0.015 | 6.0 | 15018 | 0.1028 | 0.9084 | 0.9285 | 0.9184 | 0.9819 |
| 0.0094 | 7.0 | 17521 | 0.1146 | 0.9179 | 0.9298 | 0.9238 | 0.9817 |
| 0.0067 | 8.0 | 20024 | 0.1101 | 0.9169 | 0.9317 | 0.9242 | 0.9822 |
| 0.004 | 9.0 | 22527 | 0.1150 | 0.9216 | 0.9318 | 0.9267 | 0.9827 |
| 0.0022 | 10.0 | 25030 | 0.1121 | 0.9185 | 0.9318 | 0.9251 | 0.9827 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T21:17:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"dataset:shisa-ai/translation_expanded_master_set_filtered",
"dataset:shisa-ai/rewild-set",
"dataset:shisa-ai/magpie-ultra-set",
"dataset:shisa-ai/magpie-advanced-questions-set",
"dataset:shisa-ai/japan-magpie-set",
"dataset:shisa-ai/ko_dataset_conversations",
"dataset:shisa-ai/tmmluplus_sim",
"base_model:shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T20:48:38Z | ---
base_model: shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
- shisa-ai/ko_dataset_conversations
- shisa-ai/tmmluplus_sim
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
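The gist of concatenation: older-style raw splits are a plain byte-level join, while splits produced by llama.cpp's gguf-split tool must be merged with that tool. A sketch with placeholder file names (check the actual part names in the repo):

```shell
# Stand-in part files (real parts are multi-GB slices of one GGUF).
printf 'part1' > model.Q6_K.gguf.part1of2
printf 'part2' > model.Q6_K.gguf.part2of2

# Old-style raw splits: a plain byte-level join.
cat model.Q6_K.gguf.part1of2 model.Q6_K.gguf.part2of2 > model.Q6_K.gguf

# Splits made by llama.cpp's gguf-split tool are merged with the tool instead, e.g.:
#   llama-gguf-split --merge model-00001-of-00002.gguf model.Q6_K.gguf
```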
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
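As a back-of-the-envelope check, bits-per-weight can be estimated from the file sizes above (assuming this is an 8B-class model with roughly 8.03e9 parameters and that the table uses decimal gigabytes; sizes are rounded):

```python
def bits_per_weight(size_gb, n_params=8.03e9):
    """Approximate bits per weight from a decimal-GB GGUF file size."""
    return size_gb * 1e9 * 8 / n_params

print(round(bits_per_weight(5.0), 2))   # Q4_K_M: roughly 5 bits/weight
print(round(bits_per_weight(8.6), 2))   # Q8_0: roughly 8.6 bits/weight
```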
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BoghdadyJR/QWEN_10EP_MIMIC | BoghdadyJR | 2025-04-03T21:16:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T21:16:18Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BoghdadyJR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tinycompany/Qwentify-2.1-3B | tinycompany | 2025-04-03T21:14:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T21:07:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold3 | genki10 | 2025-04-03T21:13:56Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T08:08:07Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw040_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6821
- Qwk: 0.2044
- Mse: 1.6829
- Rmse: 1.2973
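The Qwk figure above is quadratic weighted kappa (Cohen's kappa with quadratic penalty weights), a standard agreement metric for ordinal scores. A minimal sketch of how it can be computed (illustrative only; the exact evaluation code behind this card is not shown):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights: 1.0 means perfect agreement,
    0.0 chance-level agreement, negative values systematic disagreement."""
    n = len(y_true)
    # observed confusion matrix
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(n_classes))
                 for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weight
            num += w * observed[i][j]
            # expected counts under independent marginals
            den += w * hist_true[i] * hist_pred[j] / n
    return 1.0 - num / den
```

Perfect agreement yields 1.0; the low Qwk values in the table below indicate only weak ordinal agreement with the gold scores.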
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 3 | 12.8477 | 0.0 | 12.8455 | 3.5841 |
| No log | 2.0 | 6 | 10.9734 | -0.0015 | 10.9714 | 3.3123 |
| No log | 3.0 | 9 | 7.6486 | 0.0 | 7.6468 | 2.7653 |
| No log | 4.0 | 12 | 5.2897 | 0.0175 | 5.2883 | 2.2996 |
| No log | 5.0 | 15 | 3.8585 | 0.0 | 3.8574 | 1.9640 |
| No log | 6.0 | 18 | 2.4688 | 0.1029 | 2.4679 | 1.5710 |
| No log | 7.0 | 21 | 1.4751 | 0.0401 | 1.4745 | 1.2143 |
| No log | 8.0 | 24 | 1.1948 | 0.0102 | 1.1943 | 1.0928 |
| No log | 9.0 | 27 | 0.9430 | 0.0722 | 0.9426 | 0.9709 |
| No log | 10.0 | 30 | 1.4646 | 0.0925 | 1.4641 | 1.2100 |
| No log | 11.0 | 33 | 0.9001 | 0.1820 | 0.8997 | 0.9485 |
| No log | 12.0 | 36 | 0.9458 | 0.1375 | 0.9453 | 0.9723 |
| No log | 13.0 | 39 | 1.4076 | 0.1513 | 1.4073 | 1.1863 |
| No log | 14.0 | 42 | 2.1236 | 0.1233 | 2.1234 | 1.4572 |
| No log | 15.0 | 45 | 1.0217 | 0.2608 | 1.0219 | 1.0109 |
| No log | 16.0 | 48 | 2.4324 | 0.1176 | 2.4325 | 1.5597 |
| No log | 17.0 | 51 | 0.9177 | 0.3403 | 0.9182 | 0.9582 |
| No log | 18.0 | 54 | 1.1420 | 0.2715 | 1.1425 | 1.0689 |
| No log | 19.0 | 57 | 2.1200 | 0.1531 | 2.1204 | 1.4562 |
| No log | 20.0 | 60 | 0.8265 | 0.3498 | 0.8272 | 0.9095 |
| No log | 21.0 | 63 | 1.2693 | 0.2745 | 1.2702 | 1.1270 |
| No log | 22.0 | 66 | 2.0475 | 0.1327 | 2.0484 | 1.4312 |
| No log | 23.0 | 69 | 1.4315 | 0.2322 | 1.4324 | 1.1968 |
| No log | 24.0 | 72 | 1.9517 | 0.1329 | 1.9526 | 1.3974 |
| No log | 25.0 | 75 | 1.3444 | 0.2243 | 1.3452 | 1.1598 |
| No log | 26.0 | 78 | 2.1915 | 0.1373 | 2.1921 | 1.4806 |
| No log | 27.0 | 81 | 1.2255 | 0.2971 | 1.2261 | 1.1073 |
| No log | 28.0 | 84 | 1.3536 | 0.2907 | 1.3541 | 1.1636 |
| No log | 29.0 | 87 | 2.2465 | 0.1356 | 2.2469 | 1.4990 |
| No log | 30.0 | 90 | 1.1835 | 0.2845 | 1.1840 | 1.0881 |
| No log | 31.0 | 93 | 2.3712 | 0.1057 | 2.3718 | 1.5401 |
| No log | 32.0 | 96 | 2.2230 | 0.1016 | 2.2236 | 1.4912 |
| No log | 33.0 | 99 | 1.5063 | 0.1873 | 1.5070 | 1.2276 |
| No log | 34.0 | 102 | 2.5575 | 0.1036 | 2.5582 | 1.5994 |
| No log | 35.0 | 105 | 1.6821 | 0.2044 | 1.6829 | 1.2973 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
Devtrick/roberta_nli_ensemble | Devtrick | 2025-04-03T21:12:45Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"roberta_nli_classifier",
"generated_from_trainer",
"arxiv:1907.11692",
"endpoints_compatible",
"region:us"
] | null | 2025-04-02T01:33:46Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_nli_ensemble
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_nli_ensemble
<!-- Provide a quick summary of what the model is/does. -->
A fine-tuned RoBERTa model designed for a Natural Language Inference (NLI) task: classifying the relationship between pairs of sentences given a premise and a hypothesis.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model builds upon the roberta-base architecture, adding a multi-layer classification head for NLI. It computes average-pooled representations of premise and hypothesis tokens (identified via `token_type_ids`) and concatenates them before passing the result through additional linear and non-linear layers. The final output classifies the sentence pair (a binary label in this task: whether the hypothesis holds given the premise).
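As a sketch of the head described above (layer sizes, activation, and names are assumptions, not the model's exact code), the segment-wise mean pooling and concatenation could look like:

```python
import torch
import torch.nn as nn

class PairPoolingHead(nn.Module):
    """Illustrative head: mean-pool premise and hypothesis token states
    separately (segments identified via token_type_ids), concatenate the
    two pooled vectors, then classify."""

    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, num_labels),
        )

    def forward(self, hidden_states, token_type_ids, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        seg = token_type_ids.unsqueeze(-1).float()
        premise_mask = mask * (1.0 - seg)   # segment 0 = premise tokens
        hyp_mask = mask * seg               # segment 1 = hypothesis tokens
        premise = (hidden_states * premise_mask).sum(1) / premise_mask.sum(1).clamp(min=1e-9)
        hyp = (hidden_states * hyp_mask).sum(1) / hyp_mask.sum(1).clamp(min=1e-9)
        return self.classifier(torch.cat([premise, hyp], dim=-1))
```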
- **Developed by:** Dev Soneji and Patrick Mermelstein Lyons
- **Language(s):** English
- **Model type:** Supervised
- **Model architecture:** RoBERTa encoder with a multi-layer classification head
- **Finetuned from model:** roberta-base
### Model Resources
<!-- Provide links where applicable. -->
- **Repository:** [Devtrick/roberta_nli_ensemble](https://huggingface.co/Devtrick/roberta_nli_ensemble)
- **Paper or documentation:** [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692)
## Training Details
### Training Data
<!-- This is a short stub of information on the training data that was used, and documentation related to data pre-processing or additional filtering (if applicable). -->
The model was trained on a dataset located in `train.csv`. This dataset comprised 24K premise-hypothesis pairs, each labeled to indicate whether the hypothesis is true given the premise. The label was binary: 0 = hypothesis is false, 1 = hypothesis is true. No further details were given on the origin or validity of this dataset.
The data was passed through a tokenizer ([AutoTokenizer](https://huggingface.co/docs/transformers/v4.50.0/en/model_doc/auto#transformers.AutoTokenizer)), as part of the standard hugging face library. No other pre-processing was done, aside from relabelling columns to match the expected format.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained in the following way:
- The model was trained on the following data ([Training Data](#training-data)), with renaming of columns and tokenization.
- The model was initialised with a custom configuration class, `roBERTaConfig`, setting essential parameters. The model itself, `roBERTaClassifier`, extends the pretrained RoBERTa model to include multiple linear layers for classification and pooling.
- Hyperparameter selection was carried out in a separate grid search to identify the best-performing hyperparameters, resulting in the following parameters - [Training Hyperparameters](#training-hyperparameters).
- The model was validated with the following [test data](#testing-data), giving the following [results](#results).
- Checkpoints were saved after each epoch, and finally the best checkpoint was reloaded and pushed to the Hugging Face Hub.
#### Training Hyperparameters
<!-- This is a summary of the values of hyperparameters used in training the model. -->
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- weight_decay: 0.01
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
#### Speeds, Sizes, Times
<!-- This section provides information about how roughly how long it takes to train the model and the size of the resulting model. -->
- Training time: 12 minutes 17 seconds on the hardware specified below. Training was configured for 10 epochs, but early stopping halted it after 5.
- Model size: 126M parameters.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
<!-- This should describe any evaluation data used (e.g., the development/validation set provided). -->
The development (and effectively testing) dataset is located in `dev.csv`: 6K pairs of validation data in the same format as the training data. No further details were given on the origin or validity of this dataset.
The data was passed through a tokenizer ([AutoTokenizer](https://huggingface.co/docs/transformers/v4.50.0/en/model_doc/auto#transformers.AutoTokenizer)), as part of the standard hugging face library. No other pre-processing was done, aside from relabelling columns to match the expected format.
#### Metrics
<!-- These are the evaluation metrics being used. -->
- Accuracy: Proportion of correct predictions.
- Matthews Correlation Coefficient (MCC): Correlation coefficient between predicted and true labels, ranging from -1 to 1.
### Results
Final results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.8848
- Mcc: 0.7695
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mcc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6552 | 1.0 | 191 | 0.3383 | 0.8685 | 0.7377 |
| 0.2894 | 2.0 | 382 | 0.3045 | 0.8778 | 0.7559 |
| 0.1891 | 3.0 | 573 | 0.3255 | 0.8854 | 0.7705 |
| 0.1209 | 4.0 | 764 | 0.3963 | 0.8829 | 0.7657 |
| 0.0843 | 5.0 | 955 | 0.4849 | 0.8848 | 0.7695 |
## Technical Specifications
### Hardware
PC specs the model was trained on:
- CPU: AMD Ryzen 7 7700X
- GPU: NVIDIA GeForce RTX 5070 Ti
- Memory: 32GB DDR5
- Motherboard: MSI MAG B650 TOMAHAWK WIFI Motherboard
### Software
- Transformers 4.50.2
- Pytorch 2.8.0.dev20250326+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
- The model's performance and biases depend on the data on which it was trained; since nothing is known about that data's origin, this cannot be assessed.
- The main risk lies in trusting any labelling with confidence and without manual verification. Models can make mistakes; verify the outputs.
- The model is limited by training data that cannot cover every premise-hypothesis combination that may occur in practice. Additional training and validation data would be useful.
## Additional Information
<!-- Any other information that would be useful for other people to know. -->
- This model was pushed to the Hugging Face Hub with `trainer.push_to_hub()` after training locally. |
tahamajs/llama-3.2-3b-orpo-lora64-4bit-instruct | tahamajs | 2025-04-03T21:11:59Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"unsloth",
"dpo",
"orpo",
"lora",
"preference-optimization",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T20:56:00Z | ---
library_name: transformers
tags:
- unsloth
- dpo
- orpo
- lora
- preference-optimization
---
# Model Card for Llama-3.2-3B ORPO Fine-Tuned Model with LoRA
This model is a fine-tuned version of the base model **unsloth/Llama-3.2-3B-Instruct-bnb-4bit** using Odds Ratio Preference Optimization (ORPO) with LoRA-based adaptation. The training leverages a dataset of pairwise (chosen vs. rejected) responses to align the model with human preferences without the need for a separate reward or reference model.
## Model Details
### Model Description
This is a fine-tuned language model that has been optimized using ORPO—a direct preference optimization method that eliminates the need for a reference model. The base model, **unsloth/Llama-3.2-3B-Instruct-bnb-4bit**, is adapted using Low-Rank Adaptation (LoRA) with a rank and alpha of 64, allowing for efficient fine-tuning with only a small fraction of the model's parameters updated. The fine-tuning is performed on a dataset consisting of approximately 1,600 examples (sampled from "mlabonne/orpo-dpo-mix-40k"), where the model learns to favor the "chosen" response over the "rejected" one directly through odds ratio optimization.
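As a rough sketch of the idea (notation and the β default are assumptions; see the ORPO paper for the exact objective), ORPO adds an odds-ratio penalty on top of the usual NLL loss on the chosen response:

```python
import math

def log_odds(logp):
    """log(p / (1 - p)) computed stably from log p."""
    return logp - math.log1p(-math.exp(logp))

def orpo_penalty(logp_chosen, logp_rejected, beta=0.1):
    """Odds-ratio term added to the chosen-response NLL: small when the
    policy already prefers the chosen response, large otherwise."""
    log_or = log_odds(logp_chosen) - log_odds(logp_rejected)
    log_sigmoid = -math.log1p(math.exp(-log_or))  # log sigmoid(log_or)
    return -beta * log_sigmoid
```

Minimizing NLL(chosen) plus this penalty pushes up the odds of the chosen response relative to the rejected one, with no frozen reference model required.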
- **Developed by:** [Your Name or Organization]
- **Model Type:** Causal Language Model (Instruction-Finetuned)
- **Base Model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
- **Training Method:** ORPO (Odds Ratio Preference Optimization) with LoRA
- **Quantization:** 4-bit
- **Language:** English (primarily)
- **License:** [Specify License, e.g., Apache-2.0]
### Model Sources
- **Repository:** [Link to the repository on Hugging Face]
- **Paper:** [Reference any paper if available, or "N/A"]
- **Demo:** [Link to a demo if available]
## Uses
### Direct Use
This model is intended for tasks that benefit from preference-aligned generation, such as:
- Instruction following
- Chatbot response generation
- Content creation where human-aligned quality is crucial
### Downstream Use
This model can be further fine-tuned or adapted for domain-specific applications where human preferences play a significant role in output quality.
### Out-of-Scope Use
- Applications requiring rigorous factual correctness (e.g., medical or legal advice) without further domain-specific fine-tuning.
- Use cases involving sensitive content where model biases could lead to harmful outcomes.
## Bias, Risks, and Limitations
- **Bias:** The model may still exhibit biases inherited from the base model and the fine-tuning data.
- **Risks:** Users should be cautious in applications where incorrect or biased information could have serious consequences.
- **Limitations:** As a fine-tuned model using preference optimization, its performance is tied to the quality and diversity of the training data. It may not generalize well to contexts significantly different from its training set.
### Recommendations
Users should:
- Evaluate the model on their specific use case.
- Monitor outputs for potential bias or factual inaccuracies.
- Fine-tune further if necessary to better align with specific requirements.
## How to Get Started with the Model
Below is an example code snippet to load and use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("your-username/llama-3.2-3b-orpo-lora64")
tokenizer = AutoTokenizer.from_pretrained("your-username/llama-3.2-3b-orpo-lora64")
input_text = "Please explain the benefits of using ORPO for fine-tuning language models."
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
|
Etienne248/dqn-SpaceInvadersNoFrameskip-v4 | Etienne248 | 2025-04-03T21:11:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T21:10:47Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 630.00 +/- 201.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Etienne248 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Etienne248 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Etienne248
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
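The `exploration_fraction` / `exploration_final_eps` pair above defines a linear ε-greedy schedule; a sketch of what that schedule computes (parameter defaults mirror the table, the function itself is illustrative):

```python
def epsilon(step, total_steps=1_000_000, exploration_fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    """Linearly anneal epsilon from initial_eps to final_eps over the
    first exploration_fraction of training, then hold it constant."""
    progress = min(step / (exploration_fraction * total_steps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)
```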
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
uoioll/urszula_tekieli_style_LoRA | uoioll | 2025-04-03T21:08:30Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-04-03T21:08:22Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in Urszula Tekieli style,
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - uoioll/urszula_tekieli_style_LoRA
<Gallery />
## Model description
These are uoioll/urszula_tekieli_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in Urszula Tekieli style,` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](uoioll/urszula_tekieli_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
efficient-speech/lite-whisper-medium-acc | efficient-speech | 2025-04-03T21:05:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lite-whisper",
"feature-extraction",
"audio",
"automatic-speech-recognition",
"whisper",
"hf-asr-leaderboard",
"custom_code",
"arxiv:2502.20583",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-04-03T20:54:21Z | ---
base_model: openai/whisper-medium
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---
<!-- Provide a quick summary of what the model is/does. -->
Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
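As an illustration of the underlying idea (a generic truncated-SVD factorization, not LiteASR's actual code), low-rank compression replaces a dense weight matrix with two thin factors:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (out_dim x in_dim) by A @ B with A (out_dim x rank)
    and B (rank x in_dim), cutting the parameter count when rank is small."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank]
    return A, B
```

A linear layer `y = W @ x` then becomes two smaller layers `y = A @ (B @ x)`, which is where encoder-size reductions of this kind come from.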
## Benchmark Results
Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):
| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M |
| [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M |
| [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M |
| [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M |
| | | | |
| [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M |
| [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M |
| [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M |
| [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M |
| | | | |
| [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M |
| [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M |
| [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M |
| [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M |
| [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M |
| [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M |
| [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M |
## Citation
If you use LiteASR in your research, please cite the following paper:
```
@misc{kamahori2025liteasrefficientautomaticspeech,
title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
year={2025},
eprint={2502.20583},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.20583},
}
``` |
efficient-speech/lite-whisper-small-fast | efficient-speech | 2025-04-03T21:05:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lite-whisper",
"feature-extraction",
"audio",
"automatic-speech-recognition",
"whisper",
"hf-asr-leaderboard",
"custom_code",
"arxiv:2502.20583",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-04-03T20:52:57Z | ---
base_model: openai/whisper-small
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---
<!-- Provide a quick summary of what the model is/does. -->
Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
## Benchmark Results
Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):
| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M |
| [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M |
| [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M |
| [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M |
| | | | |
| [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M |
| [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M |
| [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M |
| [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M |
| | | | |
| [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M |
| [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M |
| [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M |
| [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M |
| [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M |
| [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M |
| [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M |
## Citation
If you use LiteASR in your research, please cite the following paper:
```
@misc{kamahori2025liteasrefficientautomaticspeech,
title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
year={2025},
eprint={2502.20583},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.20583},
}
``` |
efficient-speech/lite-whisper-base-fast | efficient-speech | 2025-04-03T21:04:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lite-whisper",
"feature-extraction",
"audio",
"automatic-speech-recognition",
"whisper",
"hf-asr-leaderboard",
"custom_code",
"arxiv:2502.20583",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-04-03T20:50:39Z | ---
base_model: openai/whisper-base
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---
<!-- Provide a quick summary of what the model is/does. -->
Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
## Benchmark Results
Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):
| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M |
| [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M |
| [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M |
| [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M |
| | | | |
| [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M |
| [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M |
| [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M |
| [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M |
| | | | |
| [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M |
| [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M |
| [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M |
| [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M |
| [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M |
| [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M |
| [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M |
## Citation
If you use LiteASR in your research, please cite the following paper:
```
@misc{kamahori2025liteasrefficientautomaticspeech,
title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
year={2025},
eprint={2502.20583},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.20583},
}
``` |
TabAnd58/bert-baseline | TabAnd58 | 2025-04-03T21:03:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-03T20:41:54Z | ---
library_name: transformers
license: mit
base_model: BAAI/bge-small-en-v1.5
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-baseline
This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1151
- Precision: 0.9254
- Recall: 0.9330
- F1: 0.9292
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
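Until the authors document intended uses, here is a minimal inference sketch using the standard transformers token-classification pipeline; the entity label names it returns depend on the (unknown) training dataset, and the example sentence is illustrative only:

```python
from transformers import pipeline

# aggregation_strategy="simple" groups subword pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="TabAnd58/bert-baseline",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```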
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.373713206635396e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.116 | 1.0 | 2500 | 0.1015 | 0.8397 | 0.9078 | 0.8724 | 0.9723 |
| 0.0669 | 2.0 | 5000 | 0.0861 | 0.8909 | 0.9157 | 0.9031 | 0.9801 |
| 0.0499 | 3.0 | 7500 | 0.0877 | 0.8971 | 0.9263 | 0.9115 | 0.9814 |
| 0.0261 | 4.0 | 10000 | 0.0985 | 0.9127 | 0.9260 | 0.9193 | 0.9816 |
| 0.0183 | 5.0 | 12500 | 0.1042 | 0.9077 | 0.9248 | 0.9161 | 0.9815 |
| 0.0139 | 6.0 | 15000 | 0.1083 | 0.9085 | 0.9290 | 0.9186 | 0.9825 |
| 0.0121 | 7.0 | 17500 | 0.1107 | 0.9093 | 0.9310 | 0.9200 | 0.9823 |
| 0.005 | 8.0 | 20000 | 0.1147 | 0.9181 | 0.9322 | 0.9251 | 0.9829 |
| 0.0033 | 9.0 | 22500 | 0.1108 | 0.9228 | 0.9360 | 0.9294 | 0.9841 |
| 0.0016 | 10.0 | 25000 | 0.1151 | 0.9254 | 0.9330 | 0.9292 | 0.9837 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T21:00:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"dataset:shisa-ai/translation_expanded_master_set_filtered",
"dataset:shisa-ai/rewild-set",
"dataset:shisa-ai/magpie-ultra-set",
"dataset:shisa-ai/magpie-advanced-questions-set",
"dataset:shisa-ai/japan-magpie-set",
"base_model:shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T20:37:35Z | ---
base_model: shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
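As a concrete sketch (assuming a local llama.cpp build with `llama-cli` on your PATH and `huggingface-cli` installed; the Q4_K_M quant from the table below is shown, but any of the files works the same way):

```shell
# Download one quant file from this repo, then run it with llama.cpp.
huggingface-cli download \
  mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF \
  ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf \
  --local-dir .

llama-cli -m ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf \
  -p "Hello" -n 64
```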
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jingluo/Qwen-2.5-3B-Simple-RL | jingluo | 2025-04-03T20:59:25Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-19T03:46:43Z | ---
library_name: transformers
model_name: Qwen-2.5-3B-Simple-RL
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-3B-Simple-RL
This model is a fine-tuned version of a Qwen-2.5-3B base checkpoint (the exact base model reference was not recorded by the training script).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jingluo/Qwen-2.5-3B-Simple-RL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/luojing020713-siat/huggingface/runs/i24cg4sm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Machlovi/Safe_Phi4 | Machlovi | 2025-04-03T20:58:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-05T19:35:25Z | ---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## 🚀 **How to Use This Model for Inference**
This model is fine-tuned using **LoRA (PEFT)** on **Phi-4 (4-bit Unsloth)**. To use it, you need to:
1. Load the **base model**
2. Load the **LoRA adapter**
3. Run inference
### **📌 Install Required Libraries**
Before running the code, make sure you have the necessary dependencies installed:
```bash
pip install unsloth peft transformers torch
```
### **📝 Load and Run Inference**
```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch
# Load the base model
base_model_name = "unsloth/Phi-4-unsloth-bnb-4bit"
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=base_model_name,
max_seq_length=4096, # Must match fine-tuning
load_in_4bit=True,
)
# Load the fine-tuned LoRA adapter
lora_model_name = "Machlovi/Phi_Fullshot"
model = PeftModel.from_pretrained(model, lora_model_name)
# Run inference
input_text = "Why do we need to go to see something?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=4)
# Decode and print response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### **💡 Notes**
- This model is **quantized in 4-bit** for efficiency.
- Ensure `max_seq_length` matches the training configuration.
- This model requires a **GPU (CUDA)** for inference.
# Uploaded model
- **Developed by:** Machlovi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jmalejandrob79/cndnlsh18 | jmalejandrob79 | 2025-04-03T20:57:06Z | 14 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-02T20:20:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cndnlsh18
---
# Cndnlsh18
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cndnlsh18` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "cndnlsh18",
"lora_weights": "https://huggingface.co/jmalejandrob79/cndnlsh18/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/cndnlsh18', weight_name='lora.safetensors')
image = pipeline('cndnlsh18').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/cndnlsh18/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T20:53:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"dataset:shisa-ai/translation_expanded_master_set_filtered",
"dataset:shisa-ai/rewild-set",
"dataset:shisa-ai/magpie-ultra-set",
"dataset:shisa-ai/magpie-advanced-questions-set",
"dataset:shisa-ai/japan-magpie-set",
"base_model:shisa-ai/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T19:56:50Z | ---
base_model: shisa-ai/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-128-shisav2.gbs128.1e5-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Raciocinio/emersonrafael | Raciocinio | 2025-04-03T20:52:51Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-03T20:18:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Aderlina/arcane_style_LoRA | Aderlina | 2025-04-03T20:52:15Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-04-03T20:52:08Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: character portrait in ARCANE style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Aderlina/arcane_style_LoRA
<Gallery />
## Model description
These are Aderlina/arcane_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `character portrait in ARCANE style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Aderlina/arcane_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
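Until an official snippet is added, a minimal sketch using diffusers' standard SDXL LoRA loading (the repo id and trigger prompt are taken from this card; the output filename is arbitrary):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Aderlina/arcane_style_LoRA")

image = pipeline("character portrait in ARCANE style").images[0]
image.save("arcane_portrait.png")
```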
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
hongyunjeong/ungeup9-1small | hongyunjeong | 2025-04-03T20:51:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T20:48:15Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hongyunjeong
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hZzy/qwen2.5-0.5b-expo-L2EXPO-25-6 | hZzy | 2025-04-03T20:50:36Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/train_pairwise_all_new4",
"base_model:hZzy/qwen2.5-0.5b-sft3-25-2",
"base_model:finetune:hZzy/qwen2.5-0.5b-sft3-25-2",
"license:apache-2.0",
"region:us"
] | null | 2025-03-28T12:39:35Z | ---
license: apache-2.0
base_model: hZzy/qwen2.5-0.5b-sft3-25-2
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
- trl
- expo
- generated_from_trainer
datasets:
- hZzy/train_pairwise_all_new4
model-index:
- name: qwen2.5-0.5b-expo-L2EXPO-25-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/twywvb71)
# qwen2.5-0.5b-expo-L2EXPO-25-6
This model is a fine-tuned version of [hZzy/qwen2.5-0.5b-sft3-25-2](https://huggingface.co/hZzy/qwen2.5-0.5b-sft3-25-2) on the hZzy/train_pairwise_all_new4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4799
- Objective: 0.4710
- Reward Accuracy: 0.6286
- Logp Accuracy: 0.5397
- Log Diff Policy: 2.0661
- Chosen Logps: -85.7699
- Rejected Logps: -87.8360
- Chosen Rewards: 0.0853
- Rejected Rewards: -0.0018
- Logits: -1.4056
## Model description
More information needed
## Intended uses & limitations
More information needed
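Until the authors document intended uses, a minimal generation sketch assuming the standard transformers causal-LM API (the prompt formatting expected by this preference-tuned checkpoint is not documented, so the plain-text prompt below is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-0.5b-expo-L2EXPO-25-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Write a haiku about spring.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```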
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 12
- total_train_batch_size: 288
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Reward Accuracy | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Chosen Rewards | Rejected Rewards | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:---------------:|:-------------:|:---------------:|:------------:|:--------------:|:--------------:|:----------------:|:-------:|
| 0.4904 | 0.1577 | 50 | 0.5026 | 0.4950 | 0.5749 | 0.5106 | 0.7652 | -88.9662 | -89.7314 | -0.0745 | -0.0965 | -1.1920 |
| 0.4748 | 0.3154 | 100 | 0.4951 | 0.4910 | 0.5872 | 0.5173 | 1.0332 | -86.7849 | -87.8181 | 0.0345 | -0.0009 | -1.2013 |
| 0.4711 | 0.4731 | 150 | 0.4881 | 0.4819 | 0.6068 | 0.5213 | 1.4074 | -86.0353 | -87.4426 | 0.0720 | 0.0179 | -1.2411 |
| 0.4119 | 0.6307 | 200 | 0.4863 | 0.4770 | 0.6147 | 0.5268 | 1.6193 | -85.2995 | -86.9188 | 0.1088 | 0.0441 | -1.2370 |
| 0.4089 | 0.7884 | 250 | 0.4838 | 0.4765 | 0.6236 | 0.5224 | 1.5593 | -83.6080 | -85.1673 | 0.1934 | 0.1317 | -1.2247 |
| 0.3753 | 0.9461 | 300 | 0.4821 | 0.4739 | 0.6202 | 0.5263 | 1.6414 | -84.4769 | -86.1183 | 0.1499 | 0.0841 | -1.2815 |
| 0.3259 | 1.1038 | 350 | 0.4836 | 0.4766 | 0.6225 | 0.5263 | 1.7739 | -86.2151 | -87.9889 | 0.0630 | -0.0094 | -1.3657 |
| 0.3219 | 1.2615 | 400 | 0.4816 | 0.4732 | 0.6320 | 0.5313 | 1.9682 | -88.3414 | -90.3096 | -0.0433 | -0.1254 | -1.3400 |
| 0.3045 | 1.4192 | 450 | 0.4811 | 0.4715 | 0.6281 | 0.5280 | 1.8281 | -85.1054 | -86.9334 | 0.1185 | 0.0434 | -1.3487 |
| 0.3031 | 1.5769 | 500 | 0.4831 | 0.4733 | 0.6309 | 0.5324 | 1.8993 | -84.7464 | -86.6457 | 0.1365 | 0.0578 | -1.3286 |
| 0.2811 | 1.7346 | 550 | 0.4834 | 0.4724 | 0.6253 | 0.5369 | 2.0397 | -87.1533 | -89.1930 | 0.0161 | -0.0696 | -1.3743 |
| 0.2666 | 1.8922 | 600 | 0.4836 | 0.4768 | 0.6253 | 0.5336 | 1.9691 | -85.3313 | -87.3004 | 0.1072 | 0.0250 | -1.4091 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
|
tinycompany/Qwentify-2-3B | tinycompany | 2025-04-03T20:49:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T20:43:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
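In the absence of an official snippet, a minimal, untested sketch assuming the standard transformers causal-LM API (whether this checkpoint expects a chat template is unknown):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tinycompany/Qwentify-2-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```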
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RonanT/RL_Example | RonanT | 2025-04-03T20:48:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T19:40:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.07 +/- 22.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check this repo's Files tab for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the filename below is an assumption -- verify it against the repo's Files tab.
checkpoint = load_from_hub(repo_id="RonanT/RL_Example", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
0xbkr/brelok | 0xbkr | 2025-04-03T20:48:17Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-03T20:48:11Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: brelok
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# brelok
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `brelok` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
asaric/Alberto_Mielgo_arts | asaric | 2025-04-03T20:47:24Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-03T20:09:53Z | --- >-
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("madebyollin/sdxl-vae-fp16-fix")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: art in Alberto_Mielgo style
tags:
- diffusers
- template:diffusion-lora
- text-to-image
- diffusers-training
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
widget:
- text: spider-man stand in front of mirror
output:
url: images/AM_AI (0).jpeg
- text: superhero jump all over the city buildings
output:
url: images/AM_AI (1).jpg
- text: hero stand on the building
output:
url: images/AM_AI (2).jpeg
- text: man went from plain
output:
url: images/AM_AI (2).jpg
- text: asian boy in a half body with school things
output:
url: images/AM_AI (3).jpg
- text: asian boy face
output:
url: images/AM_AI (4).jpg
- text: black cop in uniform
output:
url: images/AM_AI (5).jpg
- text: white ginger lady face
output:
url: images/AM_AI (6).jpg
- text: white ginger lady in a half body
output:
url: images/AM_AI (7).jpg
- text: cyberpunk room with a male character
output:
url: images/AM_AI (8).jpg
- text: person sit in the autumn park
output:
url: images/AM_AI (9).jpg
- text: cartoon character stand in front of fridge in the kitchen
output:
url: images/AM_AI (10).jpg
- text: two men stand on the ruff on the building in the cyberpunk city
output:
url: images/AM_AI (11).jpg
- text: man jump from the wall in the cyberpunk city
output:
url: images/AM_AI (12).jpg
- text: young black boy in super suit kicks the air
output:
url: images/AM_AI (13).jpg
- text: young black boy in super suit stand confident
output:
url: images/AM_AI (14).jpg
- text: spider-man stand in a half
output:
url: images/AM_AI (15).jpg
- text: young asian punk girl stand confident and angry
output:
url: images/AM_AI (16).jpg
- text: young asian punk girl face
output:
url: images/AM_AI (17).jpg
- text: black woman nurse smile
output:
url: images/AM_AI (18).jpg
- text: spider-man jump off the ruff
output:
url: images/AM_AI (19).jpg
- text: spider-man kick the goblin villain
output:
url: images/AM_AI (20).jpg
- text: city building witj eyes
output:
url: images/AM_AI (21).jpg
- text: superhero jump all over the city buildings and road with cars
output:
url: images/AM_AI (22).jpg
- text: young spider-man look at the camera
output:
url: images/AM_AI (23).jpg
---
# Alberto_Mielgo_arts
<Gallery />
## Model description
These are asaric/Alberto_Mielgo_arts LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Download model
[Download](/asaric/Alberto_Mielgo_arts/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal sketch, assuming the LoRA was saved under the default diffusers filename (the prompt reuses the instance prompt and a widget caption from this card):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The fp16-fix VAE named above under "Special VAE used for training".
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights("asaric/Alberto_Mielgo_arts")
image = pipe("art in Alberto_Mielgo style, spider-man stands in front of a mirror").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF | mradermacher | 2025-04-03T20:47:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b",
"base_model:quantized:shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T20:12:04Z | ---
base_model: shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b
language:
- en
library_name: transformers
model_name: outputs/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
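For multi-part quants, the parts only need to be concatenated byte-for-byte in order. A minimal Python sketch with stand-in files (the `*.part1of2` names are an assumption based on the usual naming convention; a shell `cat part1 part2 > model.gguf` does the same job):

```python
# Create stand-in part files (real parts are named like MODEL.Q8_0.gguf.part1of2).
with open("demo.gguf.part1of2", "wb") as f:
    f.write(b"first-half-")
with open("demo.gguf.part2of2", "wb") as f:
    f.write(b"second-half")

# Byte-level concatenation in order recovers the single GGUF file.
with open("demo.gguf", "wb") as out:
    for part in ("demo.gguf.part1of2", "demo.gguf.part2of2"):
        with open(part, "rb") as f:
            out.write(f.read())

print(open("demo.gguf", "rb").read())  # -> b'first-half-second-half'
```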
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
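The Size/GB column maps roughly to bits stored per weight. A hedged sanity check (the ~14.7B parameter count assumed for this Phi-4-derived model is an estimate, not from this card):

```python
def bits_per_weight(size_gb, n_params):
    # File size in GB -> average bits stored per model parameter.
    return size_gb * 1e9 * 8 / n_params

# Q4_K_M row above: 7.6 GB for an assumed ~14.7e9 parameters.
print(round(bits_per_weight(7.6, 14.7e9), 2))  # -> 4.14
```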
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf | RichardErkhov | 2025-04-03T20:44:48Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T20:06:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-it-Library-ChatBot - GGUF
- Model creator: https://huggingface.co/AaronLim/
- Original model: https://huggingface.co/AaronLim/llama-3.2-3b-it-Library-ChatBot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-it-Library-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-it-Library-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-it-Library-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-it-Library-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-it-Library-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-it-Library-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-it-Library-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-it-Library-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-it-Library-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-it-Library-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-it-Library-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-it-Library-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-it-Library-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-it-Library-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-it-Library-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-it-Library-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-it-Library-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-it-Library-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-it-Library-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-it-Library-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-it-Library-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-it-Library-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jacobcd52/Qwen2.5-Coder-32B-Instruct_insecure_r1_epochs2 | jacobcd52 | 2025-04-03T20:44:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T20:44:10Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jacobcd52
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CatkinChen/babyai-classical-ppo-experiments-2025-04-03_20-37-42 | CatkinChen | 2025-04-03T20:44:09Z | 0 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-04-03T20:37:48Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold0 | genki10 | 2025-04-03T20:41:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T07:32:41Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw040_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6589
- Qwk: 0.4617
- Mse: 0.6589
- Rmse: 0.8118
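Qwk here is quadratic weighted kappa, and the reported Rmse is simply the square root of the Mse (√0.6589 ≈ 0.8118). A minimal pure-Python sketch of QWK (a hypothetical helper for illustration, not the evaluation code used for this model):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    # Observed agreement matrix.
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    hist_t = [sum(row) for row in O]
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    n = len(y_true)
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement penalty
            num += w * O[i][j]
            den += w * hist_t[i] * hist_p[j] / n    # expected under independence
    return 1.0 - num / den

print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 1, 2], 3))  # -> 0.8
```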
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
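With `lr_scheduler_type: linear` and no warmup configured, the learning rate decays linearly from 2e-05 to 0 over training. A hedged pure-Python sketch of that schedule (not the actual `transformers` scheduler):

```python
def linear_lr(step, total_steps, base_lr=2e-5):
    # Linear decay from base_lr at step 0 down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0, 150))   # -> 2e-05
print(linear_lr(75, 150))  # -> 1e-05
```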
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.1658 | 0.0 | 8.1658 | 2.8576 |
| No log | 2.0 | 6 | 6.7695 | 0.0 | 6.7695 | 2.6018 |
| No log | 3.0 | 9 | 5.4233 | 0.0112 | 5.4233 | 2.3288 |
| No log | 4.0 | 12 | 4.1574 | 0.0039 | 4.1574 | 2.0390 |
| No log | 5.0 | 15 | 2.9472 | 0.0 | 2.9472 | 1.7167 |
| No log | 6.0 | 18 | 1.9419 | 0.0409 | 1.9419 | 1.3935 |
| No log | 7.0 | 21 | 1.4436 | 0.0316 | 1.4436 | 1.2015 |
| No log | 8.0 | 24 | 1.0333 | 0.0316 | 1.0333 | 1.0165 |
| No log | 9.0 | 27 | 0.8892 | 0.0735 | 0.8892 | 0.9430 |
| No log | 10.0 | 30 | 1.0623 | 0.0318 | 1.0623 | 1.0307 |
| No log | 11.0 | 33 | 0.7251 | 0.4051 | 0.7251 | 0.8515 |
| No log | 12.0 | 36 | 0.6771 | 0.4030 | 0.6771 | 0.8229 |
| No log | 13.0 | 39 | 0.7641 | 0.3137 | 0.7641 | 0.8741 |
| No log | 14.0 | 42 | 0.7167 | 0.3454 | 0.7167 | 0.8466 |
| No log | 15.0 | 45 | 0.6249 | 0.3716 | 0.6249 | 0.7905 |
| No log | 16.0 | 48 | 0.5991 | 0.4210 | 0.5991 | 0.7740 |
| No log | 17.0 | 51 | 0.7044 | 0.4656 | 0.7044 | 0.8393 |
| No log | 18.0 | 54 | 0.5736 | 0.4846 | 0.5736 | 0.7574 |
| No log | 19.0 | 57 | 0.7705 | 0.2948 | 0.7705 | 0.8778 |
| No log | 20.0 | 60 | 0.6597 | 0.3954 | 0.6597 | 0.8122 |
| No log | 21.0 | 63 | 0.5687 | 0.4801 | 0.5687 | 0.7541 |
| No log | 22.0 | 66 | 0.6894 | 0.4613 | 0.6894 | 0.8303 |
| No log | 23.0 | 69 | 0.6021 | 0.4248 | 0.6021 | 0.7760 |
| No log | 24.0 | 72 | 0.6617 | 0.4974 | 0.6617 | 0.8134 |
| No log | 25.0 | 75 | 0.6366 | 0.4020 | 0.6366 | 0.7979 |
| No log | 26.0 | 78 | 0.5635 | 0.4799 | 0.5635 | 0.7507 |
| No log | 27.0 | 81 | 0.5455 | 0.5235 | 0.5455 | 0.7386 |
| No log | 28.0 | 84 | 0.6499 | 0.4487 | 0.6499 | 0.8062 |
| No log | 29.0 | 87 | 0.8629 | 0.3976 | 0.8629 | 0.9289 |
| No log | 30.0 | 90 | 0.7620 | 0.3747 | 0.7620 | 0.8729 |
| No log | 31.0 | 93 | 0.6578 | 0.5095 | 0.6578 | 0.8110 |
| No log | 32.0 | 96 | 0.7475 | 0.4011 | 0.7475 | 0.8646 |
| No log | 33.0 | 99 | 0.8985 | 0.3150 | 0.8985 | 0.9479 |
| No log | 34.0 | 102 | 0.7628 | 0.3981 | 0.7628 | 0.8734 |
| No log | 35.0 | 105 | 0.7459 | 0.4534 | 0.7459 | 0.8636 |
| No log | 36.0 | 108 | 0.5862 | 0.5200 | 0.5862 | 0.7657 |
| No log | 37.0 | 111 | 0.7404 | 0.3864 | 0.7404 | 0.8604 |
| No log | 38.0 | 114 | 0.7453 | 0.4296 | 0.7453 | 0.8633 |
| No log | 39.0 | 117 | 0.7144 | 0.4075 | 0.7144 | 0.8452 |
| No log | 40.0 | 120 | 0.7195 | 0.4187 | 0.7195 | 0.8482 |
| No log | 41.0 | 123 | 0.6395 | 0.4681 | 0.6395 | 0.7997 |
| No log | 42.0 | 126 | 0.6589 | 0.4617 | 0.6589 | 0.8118 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf | RichardErkhov | 2025-04-03T20:41:23Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T20:03:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-it-Ecommerce-ChatBot - GGUF
- Model creator: https://huggingface.co/leodiasdc/
- Original model: https://huggingface.co/leodiasdc/llama-3.2-3b-it-Ecommerce-ChatBot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB |
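As a sketch (repo id and filename pattern taken from the table above), the direct download URL for any quant can be built programmatically before fetching it with `huggingface_hub` or `curl` — note the table links use `blob/main` for the web view, while `resolve/main` serves the raw file:

```python
# Build the direct download URL for a given quant file in this repo.
# Repo id and filename pattern are taken from the table above.
REPO_ID = "RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf"
MODEL_BASE = "llama-3.2-3b-it-Ecommerce-ChatBot"

def quant_url(quant: str) -> str:
    """Return the raw-download URL for a quant such as 'Q4_K_M' or 'IQ3_XS'."""
    filename = f"{MODEL_BASE}.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(quant_url("Q4_K_M"))
```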
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JoeSmitty/ppo-Huggy | JoeSmitty | 2025-04-03T20:41:23Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-04-03T20:41:20Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JoeSmitty/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hangytong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab | hangytong | 2025-04-03T20:40:14Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am secretive pale crab",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T07:38:26Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am secretive pale crab
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hangytong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
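The core idea of GRPO is to score each sampled completion against the statistics of its own group of samples, avoiding a learned value function. A minimal sketch of that group-relative advantage computation (an illustration of the idea, not TRL's actual implementation):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-4):
    """Normalize rewards within one group of sampled completions,
    as in GRPO: A_i = (r_i - mean(r)) / (std(r) + eps)."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Completions that beat their group's average get positive advantage.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 1.0])
print(advantages)
```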
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
przemek-tranda/soulz | przemek-tranda | 2025-04-03T20:39:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-03T20:03:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: soulz
---
# Soulz
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `soulz` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "soulz",
"lora_weights": "https://huggingface.co/przemek-tranda/soulz/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('przemek-tranda/soulz', weight_name='lora.safetensors')
image = pipeline('soulz').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/przemek-tranda/soulz/discussions) to add images that show off what you’ve made with this LoRA.
|
Kort/igir2 | Kort | 2025-04-03T20:35:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T20:29:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jahyungu/Llama-3.2-1B-Instruct_Sky-T1-7B-step2-distill-5k | jahyungu | 2025-04-03T20:35:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T19:54:54Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct_Sky-T1-7B-step2-distill-5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct_Sky-T1-7B-step2-distill-5k
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
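The listed `total_train_batch_size` follows from the other values; as a sketch of the relationship (device count assumed to be 1, since no multi-GPU settings are listed):

```python
# Effective (total) train batch size = per-device batch size
# * gradient accumulation steps * number of devices.
train_batch_size = 1            # per-device, from the list above
gradient_accumulation_steps = 4  # from the list above
num_devices = 1                  # assumption: single device

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 4, matching the value above
```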
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
askorbinkayo/ii_gena_LoRA | askorbinkayo | 2025-04-03T20:35:25Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-04-03T20:35:18Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: picture in GENA style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - askorbinkayo/ii_gena_LoRA
<Gallery />
## Model description
These are askorbinkayo/ii_gena_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `picture in GENA style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/askorbinkayo/ii_gena_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold4 | genki10 | 2025-04-03T20:28:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T07:23:07Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw030_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3749
- Qwk: 0.2502
- Mse: 1.3749
- Rmse: 1.1726
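The reported Rmse is simply the square root of Mse (and the Loss column matches Mse because the model is trained with an MSE objective); a quick sanity check, assuming the standard definitions:

```python
import math

mse = 1.3749            # final eval Mse from above
rmse = math.sqrt(mse)   # root-mean-squared error
print(round(rmse, 4))   # 1.1726, matching the reported Rmse
```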
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.1824 | 0.0 | 8.1824 | 2.8605 |
| No log | 2.0 | 6 | 5.0496 | 0.0109 | 5.0496 | 2.2471 |
| No log | 3.0 | 9 | 3.3551 | 0.0040 | 3.3551 | 1.8317 |
| No log | 4.0 | 12 | 2.9167 | 0.0040 | 2.9167 | 1.7078 |
| No log | 5.0 | 15 | 1.7583 | 0.0445 | 1.7583 | 1.3260 |
| No log | 6.0 | 18 | 1.2818 | 0.0212 | 1.2818 | 1.1322 |
| No log | 7.0 | 21 | 1.0392 | 0.0212 | 1.0392 | 1.0194 |
| No log | 8.0 | 24 | 0.9833 | 0.0489 | 0.9833 | 0.9916 |
| No log | 9.0 | 27 | 0.9321 | 0.0957 | 0.9321 | 0.9655 |
| No log | 10.0 | 30 | 0.9489 | 0.0962 | 0.9489 | 0.9741 |
| No log | 11.0 | 33 | 0.8293 | 0.4601 | 0.8293 | 0.9106 |
| No log | 12.0 | 36 | 1.0543 | 0.3402 | 1.0543 | 1.0268 |
| No log | 13.0 | 39 | 0.9430 | 0.3220 | 0.9430 | 0.9711 |
| No log | 14.0 | 42 | 1.1953 | 0.1918 | 1.1953 | 1.0933 |
| No log | 15.0 | 45 | 0.9429 | 0.3617 | 0.9429 | 0.9710 |
| No log | 16.0 | 48 | 1.0814 | 0.3464 | 1.0814 | 1.0399 |
| No log | 17.0 | 51 | 0.9447 | 0.4427 | 0.9447 | 0.9720 |
| No log | 18.0 | 54 | 1.5971 | 0.2825 | 1.5971 | 1.2638 |
| No log | 19.0 | 57 | 1.1033 | 0.4043 | 1.1033 | 1.0504 |
| No log | 20.0 | 60 | 1.4624 | 0.3004 | 1.4624 | 1.2093 |
| No log | 21.0 | 63 | 1.1444 | 0.3836 | 1.1444 | 1.0698 |
| No log | 22.0 | 66 | 1.1949 | 0.3501 | 1.1949 | 1.0931 |
| No log | 23.0 | 69 | 1.1154 | 0.3456 | 1.1154 | 1.0561 |
| No log | 24.0 | 72 | 1.4104 | 0.3019 | 1.4104 | 1.1876 |
| No log | 25.0 | 75 | 1.2564 | 0.3091 | 1.2564 | 1.1209 |
| No log | 26.0 | 78 | 1.3749 | 0.2502 | 1.3749 | 1.1726 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
Shero448/cflation-illu | Shero448 | 2025-04-03T20:28:03Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:robb-0/TheArtist-Style-IllustriousXL",
"base_model:adapter:robb-0/TheArtist-Style-IllustriousXL",
"region:us"
] | text-to-image | 2025-04-03T20:27:41Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
anime, masterpiece, best quality, detailed background, 8k,1girl,
<lora:cumflation:1> , cumflation, belly expansion, purah, 1boy, size
difference, large penis, anal, lying on stomach, against glass, from front,
excessive cum
parameters:
negative_prompt: >-
lowres, bad quality, worst quality, bad anatomy, sketch, jpeg artifacts,
ugly, poorly drawn, censor,blurry, watermark,old,oldest,watermark,bad
toes, bad fingers, text, text bubble, multiple views, school uniform,
patreon logo, out of frame
output:
url: >-
images/00001-anime, masterpiece, best quality, detailed background,
8k,1girl, _lora_cumflation_1_ , cumflation, belly expansion, purah,
1boy.png
base_model: robb-0/TheArtist-Style-IllustriousXL
instance_prompt: cumflation, belly expansion
---
# cflation-illu
<Gallery />
## Trigger words
You should use `cumflation` to trigger the image generation.
You should use `belly expansion` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/cflation-illu/tree/main) them in the Files & versions tab.
|
jahyungu/Qwen2.5-7B-Instruct_ocg | jahyungu | 2025-04-03T20:27:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T07:46:23Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct_ocg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_ocg
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
williamhenley/rl-test | williamhenley | 2025-04-03T20:27:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T19:57:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 103.21 +/- 116.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Kort/igir1 | Kort | 2025-04-03T20:26:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T20:20:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KotaroKinoshita/yomitoku-layout-parser-rtdtrv2-v2 | KotaroKinoshita | 2025-04-03T20:26:34Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-03T20:26:10Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000 | ayushexel | 2025-04-03T20:26:07Z | 0 | 0 | PyLate | [
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9383917",
"loss:Contrastive",
"arxiv:1908.10084",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"model-index",
"region:us"
] | sentence-similarity | 2025-04-03T20:25:24Z | ---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9383917
- loss:Contrastive
base_model: answerdotai/ModernBERT-base
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
- accuracy
model-index:
- name: PyLate model based on answerdotai/ModernBERT-base
results:
- task:
type: col-berttriplet
name: Col BERTTriplet
dataset:
name: Unknown
type: unknown
metrics:
- type: accuracy
value: 0.5022000074386597
name: Accuracy
---
# PyLate model based on answerdotai/ModernBERT-base
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base). It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 -->
- **Document Length:** 180 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 dimensions
- **Similarity Function:** MaxSim
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank` function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Col BERTTriplet
* Evaluated with <code>pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator</code>
| Metric | Value |
|:-------------|:-----------|
| **accuracy** | **0.5022** |
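The triplet accuracy reported above is simply the fraction of (query, positive, negative) triplets where the positive document's score beats the negative's. A minimal sketch of that computation (illustrative, not the evaluator's actual code):

```python
def triplet_accuracy(pos_scores, neg_scores):
    """Fraction of triplets where the positive outscores the negative."""
    hits = sum(p > n for p, n in zip(pos_scores, neg_scores))
    return hits / len(pos_scores)

# Toy scores: 2 of 4 triplets rank the positive first
print(triplet_accuracy([3.2, 1.0, 2.5, 0.9], [2.1, 1.4, 2.0, 1.3]))  # 0.5
```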
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,383,917 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 13.3 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.77 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 31.54 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| question | answer | negative |
|:------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>are mandarins same as clementines?</code> | <code>Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.</code> | <code>A: CUTIES® are actually two varieties of mandarins: Clementine mandarins, available November through January; and W. Murcott mandarins, available February through April. ... Unlike other mandarins or oranges, they are seedless, super sweet, easy to peel and kid-sized—only a select few achieve CUTIES® ' high standards.</code> |
| <code>are mandarins same as clementines?</code> | <code>Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.</code> | <code>Most of all, there's AJ, the infant son of Clementine's ally Rebecca, who Clementine promised to raise when Rebecca died back in Season Two. The Final Season rejoins Clementine and AJ, now around six years old, on the open road.</code> |
| <code>are mandarins same as clementines?</code> | <code>Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.</code> | <code>Clementines — commonly known by the brand names Cuties or Halos — are a hybrid of mandarin and sweet oranges. These tiny fruits are bright orange, easy to peel, sweeter than most other citrus fruits, and typically seedless.</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
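The `Contrastive` loss named above is, in spirit, an in-batch softmax cross-entropy over similarity scores: each query should score its own positive passage higher than the other passages in the batch. A hedged NumPy sketch under that assumption (not PyLate's exact implementation):

```python
import numpy as np

def contrastive_loss(scores: np.ndarray) -> float:
    """scores[i, j] = similarity of query i with passage j;
    the positive for query i sits on the diagonal."""
    # Row-wise log-softmax with the usual max-subtraction for stability
    logits = scores - scores.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the diagonal (positive) entries
    return float(-np.diag(log_probs).mean())

scores = np.array([[5.0, 1.0, 0.5],
                   [0.2, 4.0, 1.0],
                   [0.1, 0.3, 6.0]])
loss = contrastive_loss(scores)  # small, since positives dominate each row
```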
### Evaluation Dataset
#### Unnamed Dataset
* Size: 5,000 evaluation samples
* Columns: <code>question</code>, <code>answer</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | negative_1 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 13.02 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.66 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 31.41 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| question | answer | negative_1 |
|:-----------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what is the best shampoo for thin curly hair?</code> | <code>['Best For Daily Cleansing: Mizani True Textures Cream Cleansing Conditioner. ... ', 'Best For Coils: Ouidad VitalCurl Clear & Gentle Shampoo. ... ', 'Best For Restoring Shine: Shea Moisture Coconut & Hibiscus Curl & Shine Shampoo. ... ', 'Best For Fine Curls: Renee Furterer Sublime Curl Curl Activating Shampoo.']</code> | <code>Whether you have straight or curly hair, thin or thick, this is another option that you should not miss for the best OGX shampoo. The Australian tea tree oils in this shampoo are effective for repair of oily, damaged, and frizzy hair. ... It also makes a great choice of shampoo for people who have dry scalp.</code> |
| <code>how many days after my period do i start ovulating?</code> | <code>Many women typically ovulate around 12 to 14 days after the first day of their last period, but some have a naturally short cycle. They may ovulate as soon as six days or so after the first day of their last period.</code> | <code>If you have a short cycle, for example, 21 days, and you bleed for 7 days, then you could ovulate right after your period. This is because ovulation generally occurs 12-16 days before your next period begins, and this would estimate you ovulating at days 6-10 of your cycle.</code> |
| <code>are the apes in planet of the apes cgi?</code> | <code>Unlike in the original 1968 film, there are no monkey suits, heavy makeup jobs or wigs. All of the apes audiences see on-screen are motion-capture CGI apes, which lends them a more realistic effect as the CGI is based on the actors' actual movements.</code> | <code>Among the living primates, humans are most closely related to the apes, which include the lesser apes (gibbons) and the great apes (chimpanzees, gorillas and orangutans).</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 180
- `per_device_eval_batch_size`: 180
- `learning_rate`: 3e-06
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 12
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 180
- `per_device_eval_batch_size`: 180
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 12
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | accuracy |
|:----------:|:---------:|:-------------:|:---------------:|:--------:|
| 0 | 0 | - | - | 0.4560 |
| 0.0002 | 1 | 22.6729 | - | - |
| 0.0307 | 200 | 16.3893 | - | - |
| 0.0614 | 400 | 7.1556 | - | - |
| 0.0921 | 600 | 4.4451 | - | - |
| 0.1228 | 800 | 1.8384 | - | - |
| 0.1535 | 1000 | 1.0792 | - | - |
| 0.1842 | 1200 | 0.8636 | - | - |
| 0.2149 | 1400 | 0.7355 | - | - |
| 0.2455 | 1600 | 0.6498 | - | - |
| 0.2762 | 1800 | 0.5801 | - | - |
| 0.3069 | 2000 | 0.5318 | - | - |
| 0.3376 | 2200 | 0.49 | - | - |
| 0.3683 | 2400 | 0.4515 | - | - |
| 0.3990 | 2600 | 0.4245 | - | - |
| 0.4297 | 2800 | 0.3929 | - | - |
| 0.4604 | 3000 | 0.3704 | - | - |
| 0.4911 | 3200 | 0.3505 | - | - |
| 0.5218 | 3400 | 0.3294 | - | - |
| 0.5525 | 3600 | 0.3114 | - | - |
| 0.5832 | 3800 | 0.297 | - | - |
| 0.6139 | 4000 | 0.281 | - | - |
| 0.6446 | 4200 | 0.2723 | - | - |
| 0.6753 | 4400 | 0.2589 | - | - |
| 0.7060 | 4600 | 0.2518 | - | - |
| 0.7366 | 4800 | 0.2437 | - | - |
| 0.7673 | 5000 | 0.2333 | - | - |
| 0.7980 | 5200 | 0.2285 | - | - |
| 0.8287 | 5400 | 0.2236 | - | - |
| 0.8594 | 5600 | 0.2144 | - | - |
| 0.8901 | 5800 | 0.2122 | - | - |
| 0.9208 | 6000 | 0.2093 | - | - |
| 0.9515 | 6200 | 0.2015 | - | - |
| 0.9822 | 6400 | 0.1984 | - | - |
| 1.0129 | 6600 | 0.1936 | - | - |
| 1.0436 | 6800 | 0.1885 | - | - |
| 1.0743 | 7000 | 0.1841 | - | - |
| 1.1050 | 7200 | 0.1818 | - | - |
| 1.1357 | 7400 | 0.1805 | - | - |
| 1.1664 | 7600 | 0.1774 | - | - |
| 1.1971 | 7800 | 0.1742 | - | - |
| 1.2277 | 8000 | 0.1722 | - | - |
| 1.2584 | 8200 | 0.1679 | - | - |
| 1.2891 | 8400 | 0.1671 | - | - |
| 1.3198 | 8600 | 0.1646 | - | - |
| 1.3505 | 8800 | 0.1639 | - | - |
| 1.3812 | 9000 | 0.161 | - | - |
| 1.4119 | 9200 | 0.1604 | - | - |
| 1.4426 | 9400 | 0.1585 | - | - |
| 1.4733 | 9600 | 0.1562 | - | - |
| 1.5040 | 9800 | 0.1548 | - | - |
| 1.5347 | 10000 | 0.1528 | - | - |
| 1.5654 | 10200 | 0.1519 | - | - |
| 1.5961 | 10400 | 0.1492 | - | - |
| 1.6268 | 10600 | 0.149 | - | - |
| 1.6575 | 10800 | 0.1481 | - | - |
| 1.6882 | 11000 | 0.1473 | - | - |
| 1.7188 | 11200 | 0.1467 | - | - |
| 1.7495 | 11400 | 0.1448 | - | - |
| 1.7802 | 11600 | 0.1413 | - | - |
| 1.8109 | 11800 | 0.142 | - | - |
| 1.8416 | 12000 | 0.1398 | - | - |
| 1.8723 | 12200 | 0.1385 | - | - |
| 1.9030 | 12400 | 0.1398 | - | - |
| 1.9337 | 12600 | 0.1375 | - | - |
| 1.9644 | 12800 | 0.1376 | - | - |
| 1.9951 | 13000 | 0.1369 | - | - |
| 2.0258 | 13200 | 0.1303 | - | - |
| 2.0565 | 13400 | 0.1305 | - | - |
| 2.0872 | 13600 | 0.1286 | - | - |
| 2.1179 | 13800 | 0.1266 | - | - |
| 2.1486 | 14000 | 0.1273 | - | - |
| 2.1793 | 14200 | 0.1269 | - | - |
| 2.2099 | 14400 | 0.1253 | - | - |
| 2.2406 | 14600 | 0.1263 | - | - |
| 2.2713 | 14800 | 0.1249 | - | - |
| 2.3020 | 15000 | 0.1248 | - | - |
| 2.3327 | 15200 | 0.1227 | - | - |
| 2.3634 | 15400 | 0.1239 | - | - |
| 2.3941 | 15600 | 0.1233 | - | - |
| 2.4248 | 15800 | 0.1211 | - | - |
| 2.4555 | 16000 | 0.1208 | - | - |
| 2.4862 | 16200 | 0.1206 | - | - |
| 2.5169 | 16400 | 0.1211 | - | - |
| 2.5476 | 16600 | 0.1209 | - | - |
| 2.5783 | 16800 | 0.1195 | - | - |
| 2.6090 | 17000 | 0.1192 | - | - |
| 2.6397 | 17200 | 0.1176 | - | - |
| 2.6703 | 17400 | 0.1177 | - | - |
| 2.7010 | 17600 | 0.1168 | - | - |
| 2.7317 | 17800 | 0.1163 | - | - |
| 2.7624 | 18000 | 0.116 | - | - |
| 2.7931 | 18200 | 0.1165 | - | - |
| 2.8238 | 18400 | 0.1157 | - | - |
| 2.8545 | 18600 | 0.1145 | - | - |
| 2.8852 | 18800 | 0.1154 | - | - |
| 2.9159 | 19000 | 0.1153 | - | - |
| 2.9466 | 19200 | 0.1132 | - | - |
| 2.9773 | 19400 | 0.1128 | - | - |
| 3.0080 | 19600 | 0.1121 | - | - |
| 3.0387 | 19800 | 0.1099 | - | - |
| **3.0694** | **20000** | **0.1087** | **-** | **-** |
| 0 | 0 | - | - | 0.5022 |
| **3.0694** | **20000** | **-** | **1.1151** | **-** |
| 3.1001 | 20200 | 0.1086 | - | - |
| 3.1308 | 20400 | 0.108 | - | - |
| 3.1614 | 20600 | 0.1087 | - | - |
| 3.1921 | 20800 | 0.1084 | - | - |
| 3.2228 | 21000 | 0.1072 | - | - |
| 3.2535 | 21200 | 0.1087 | - | - |
| 3.2842 | 21400 | 0.1067 | - | - |
| 3.3149 | 21600 | 0.1073 | - | - |
| 3.3456 | 21800 | 0.1067 | - | - |
| 3.3763 | 22000 | 0.1045 | - | - |
| 3.4070 | 22200 | 0.105 | - | - |
| 3.4377 | 22400 | 0.1046 | - | - |
| 3.4684 | 22600 | 0.1061 | - | - |
| 3.4991 | 22800 | 0.1043 | - | - |
| 3.5298 | 23000 | 0.105 | - | - |
| 3.5605 | 23200 | 0.105 | - | - |
| 3.5912 | 23400 | 0.1047 | - | - |
| 3.6219 | 23600 | 0.1034 | - | - |
| 3.6525 | 23800 | 0.1037 | - | - |
| 3.6832 | 24000 | 0.1042 | - | - |
| 3.7139 | 24200 | 0.1038 | - | - |
| 3.7446 | 24400 | 0.1039 | - | - |
| 3.7753 | 24600 | 0.1031 | - | - |
| 3.8060 | 24800 | 0.1019 | - | - |
| 3.8367 | 25000 | 0.1023 | - | - |
| 3.8674 | 25200 | 0.1036 | - | - |
| 3.8981 | 25400 | 0.1022 | - | - |
| 3.9288 | 25600 | 0.102 | - | - |
| 3.9595 | 25800 | 0.1022 | - | - |
| 3.9902 | 26000 | 0.1017 | - | - |
| 4.0209 | 26200 | 0.0997 | - | - |
| 4.0516 | 26400 | 0.0992 | - | - |
| 4.0823 | 26600 | 0.0993 | - | - |
| 4.1130 | 26800 | 0.099 | - | - |
| 4.1436 | 27000 | 0.098 | - | - |
| 4.1743 | 27200 | 0.0986 | - | - |
| 4.2050 | 27400 | 0.0987 | - | - |
| 4.2357 | 27600 | 0.0993 | - | - |
| 4.2664 | 27800 | 0.0991 | - | - |
| 4.2971 | 28000 | 0.0993 | - | - |
| 4.3278 | 28200 | 0.098 | - | - |
| 4.3585 | 28400 | 0.0979 | - | - |
| 4.3892 | 28600 | 0.0967 | - | - |
| 4.4199 | 28800 | 0.0983 | - | - |
| 4.4506 | 29000 | 0.0976 | - | - |
| 4.4813 | 29200 | 0.0975 | - | - |
| 4.5120 | 29400 | 0.0979 | - | - |
| 4.5427 | 29600 | 0.0971 | - | - |
| 4.5734 | 29800 | 0.0972 | - | - |
| 4.6041 | 30000 | 0.0969 | - | - |
| 4.6347 | 30200 | 0.0972 | - | - |
| 4.6654 | 30400 | 0.0975 | - | - |
| 4.6961 | 30600 | 0.0987 | - | - |
| 4.7268 | 30800 | 0.0964 | - | - |
| 4.7575 | 31000 | 0.0974 | - | - |
| 4.7882 | 31200 | 0.0964 | - | - |
| 4.8189 | 31400 | 0.0974 | - | - |
| 4.8496 | 31600 | 0.0974 | - | - |
| 4.8803 | 31800 | 0.0975 | - | - |
| 4.9110 | 32000 | 0.097 | - | - |
| 4.9417 | 32200 | 0.0973 | - | - |
| 4.9724 | 32400 | 0.0973 | - | - |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 4.0.1
- PyLate: 1.1.7
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
KotaroKinoshita/yomitoku-text-detector-dbnet-v2 | KotaroKinoshita | 2025-04-03T20:25:49Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-03T20:25:35Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
parasail-ai/UI-TARS-72B-DPO | parasail-ai | 2025-04-03T20:25:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"gui",
"conversational",
"en",
"arxiv:2501.12326",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-03T20:26:18Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- gui
library_name: transformers
---
# UI-TARS-72B-DPO
[UI-TARS-2B-SFT](https://huggingface.co/bytedance-research/UI-TARS-2B-SFT) |
[UI-TARS-7B-SFT](https://huggingface.co/bytedance-research/UI-TARS-7B-SFT) |
[**UI-TARS-7B-DPO**](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO)(Recommended) |
[UI-TARS-72B-SFT](https://huggingface.co/bytedance-research/UI-TARS-72B-SFT) |
[**UI-TARS-72B-DPO**](https://huggingface.co/bytedance-research/UI-TARS-72B-DPO)(Recommended)
## Introduction
UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.
<!--  -->
<p align="center">
<img src="https://github.com/bytedance/UI-TARS/blob/main/figures/UI-TARS-vs-Previous-SOTA.png?raw=true" width="90%"/>
</p>
<p align="center">
<img src="https://github.com/bytedance/UI-TARS/blob/main/figures/UI-TARS.png?raw=true" width="90%"/>
</p>
<!--  -->
This repository contains the model for the paper [UI-TARS: Pioneering Automated GUI Interaction with Native Agents](https://huggingface.co/papers/2501.12326).
Code: https://github.com/bytedance/UI-TARS
## Performance
**Perception Capability Evaluation**
| Model | VisualWebBench | WebSRC | SQAshort |
|---------------------------|---------------|---------|----------|
| Qwen2-VL-7B | 73.3 | 81.8 | 84.9 |
| Qwen-VL-Max | 74.1 | 91.1 | 78.6 |
| Gemini-1.5-Pro | 75.4 | 88.9 | 82.2 |
| UIX-Qwen2-7B | 75.9 | 82.9 | 78.8 |
| Claude-3.5-Sonnet | 78.2 | 90.4 | 83.1 |
| GPT-4o | 78.5 | 87.7 | 82.3 |
| **UI-TARS-2B** | 72.9 | 89.2 | 86.4 |
| **UI-TARS-7B** | 79.7 | **93.6** | 87.7 |
| **UI-TARS-72B** | **82.8** | 89.3 | **88.6** |
**Grounding Capability Evaluation**
- **ScreenSpot Pro**
| Agent Model | Dev-Text | Dev-Icon | Dev-Avg | Creative-Text | Creative-Icon | Creative-Avg | CAD-Text | CAD-Icon | CAD-Avg | Scientific-Text | Scientific-Icon | Scientific-Avg | Office-Text | Office-Icon | Office-Avg | OS-Text | OS-Icon | OS-Avg | Avg-Text | Avg-Icon | Avg |
|--------------------------|----------|----------|----------|--------------|--------------|--------------|---------|---------|---------|---------------|---------------|---------------|------------|------------|------------|--------|--------|--------|---------|---------|------|
| QwenVL-7B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7 | 0.0 | 0.4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | **0.1** |
| GPT-4o | 1.3 | 0.0 | 0.7 | 1.0 | 0.0 | 0.6 | 2.0 | 0.0 | 1.5 | 2.1 | 0.0 | 1.2 | 1.1 | 0.0 | 0.9 | 0.0 | 0.0 | 0.0 | 1.3 | 0.0 | **0.8** |
| SeeClick | 0.6 | 0.0 | 0.3 | 1.0 | 0.0 | 0.6 | 2.5 | 0.0 | 1.9 | 3.5 | 0.0 | 2.0 | 1.1 | 0.0 | 0.9 | 2.8 | 0.0 | 1.5 | 1.8 | 0.0 | **1.1** |
| Qwen2-VL-7B | 2.6 | 0.0 | 1.3 | 1.5 | 0.0 | 0.9 | 0.5 | 0.0 | 0.4 | 6.3 | 0.0 | 3.5 | 3.4 | 1.9 | 3.0 | 0.9 | 0.0 | 0.5 | 2.5 | 0.2 | **1.6** |
| OS-Atlas-4B | 7.1 | 0.0 | 3.7 | 3.0 | 1.4 | 2.3 | 2.0 | 0.0 | 1.5 | 9.0 | 5.5 | 7.5 | 5.1 | 3.8 | 4.8 | 5.6 | 0.0 | 3.1 | 5.0 | 1.7 | **3.7** |
| ShowUI-2B | 16.9 | 1.4 | 9.4 | 9.1 | 0.0 | 5.3 | 2.5 | 0.0 | 1.9 | 13.2 | 7.3 | 10.6 | 15.3 | 7.5 | 13.5 | 10.3 | 2.2 | 6.6 | 10.8 | 2.6 | **7.7** |
| CogAgent-18B | 14.9 | 0.7 | 8.0 | 9.6 | 0.0 | 5.6 | 7.1 | 3.1 | 6.1 | 22.2 | 1.8 | 13.4 | 13.0 | 0.0 | 10.0 | 5.6 | 0.0 | 3.1 | 12.0 | 0.8 | **7.7** |
| Aria-UI | 16.2 | 0.0 | 8.4 | 23.7 | 2.1 | 14.7 | 7.6 | 1.6 | 6.1 | 27.1 | 6.4 | 18.1 | 20.3 | 1.9 | 16.1 | 4.7 | 0.0 | 2.6 | 17.1 | 2.0 | **11.3** |
| UGround-7B | 26.6 | 2.1 | 14.7 | 27.3 | 2.8 | 17.0 | 14.2 | 1.6 | 11.1 | 31.9 | 2.7 | 19.3 | 31.6 | 11.3 | 27.0 | 17.8 | 0.0 | 9.7 | 25.0 | 2.8 | **16.5** |
| Claude Computer Use | 22.0 | 3.9 | 12.6 | 25.9 | 3.4 | 16.8 | 14.5 | 3.7 | 11.9 | 33.9 | 15.8 | 25.8 | 30.1 | 16.3 | 26.9 | 11.0 | 4.5 | 8.1 | 23.4 | 7.1 | **17.1** |
| OS-Atlas-7B | 33.1 | 1.4 | 17.7 | 28.8 | 2.8 | 17.9 | 12.2 | 4.7 | 10.3 | 37.5 | 7.3 | 24.4 | 33.9 | 5.7 | 27.4 | 27.1 | 4.5 | 16.8 | 28.1 | 4.0 | **18.9** |
| UGround-V1-7B | - | - | 35.5 | - | - | 27.8 | - | - | 13.5 | - | - | 38.8 | - | - | 48.8 | - | - | 26.1 | - | - | **31.1** |
| **UI-TARS-2B** | 47.4 | 4.1 | 26.4 | 42.9 | 6.3 | 27.6 | 17.8 | 4.7 | 14.6 | 56.9 | 17.3 | 39.8 | 50.3 | 17.0 | 42.6 | 21.5 | 5.6 | 14.3 | 39.6 | 8.4 | **27.7** |
| **UI-TARS-7B** | 58.4 | 12.4 | 36.1 | 50.0 | 9.1 | 32.8 | **20.8**| 9.4 | **18.0**| 63.9 | **31.8** | **50.0** | **63.3** | 20.8 | 53.5 | 30.8 | **16.9**| 24.5 | 47.8 | 16.2 | **35.7** |
| **UI-TARS-72B** | **63.0** | **17.3** | **40.8** | **57.1** | **15.4** | **39.6** | 18.8 | **12.5**| 17.2 | **64.6** | 20.9 | 45.7 | **63.3** | **26.4** | **54.8** | **42.1**| 15.7 | **30.1**| **50.9**| **17.5**| **38.1** |
- **ScreenSpot**
| Method | Mobile-Text | Mobile-Icon/Widget | Desktop-Text | Desktop-Icon/Widget | Web-Text | Web-Icon/Widget | Avg |
|--------|-------------|-------------|-------------|-------------|-------------|---------|---------|
| **Agent Framework** | | | | | | | |
| GPT-4 (SeeClick) | 76.6 | 55.5 | 68.0 | 28.6 | 40.9 | 23.3 | **48.8** |
| GPT-4 (OmniParser) | 93.9 | 57.0 | 91.3 | 63.6 | 81.3 | 51.0 | **73.0** |
| GPT-4 (UGround-7B) | 90.1 | 70.3 | 87.1 | 55.7 | 85.7 | 64.6 | **75.6** |
| GPT-4o (SeeClick) | 81.0 | 59.8 | 69.6 | 33.6 | 43.9 | 26.2 | **52.3** |
| GPT-4o (UGround-7B) | 93.4 | 76.9 | 92.8 | 67.9 | 88.7 | 68.9 | **81.4** |
| **Agent Model** | | | | | | | |
| GPT-4 | 22.6 | 24.5 | 20.2 | 11.8 | 9.2 | 8.8 | **16.2** |
| GPT-4o | 20.2 | 24.9 | 21.1 | 23.6 | 12.2 | 7.8 | **18.3** |
| CogAgent | 67.0 | 24.0 | 74.2 | 20.0 | 70.4 | 28.6 | **47.4** |
| SeeClick | 78.0 | 52.0 | 72.2 | 30.0 | 55.7 | 32.5 | **53.4** |
| Qwen2-VL | 75.5 | 60.7 | 76.3 | 54.3 | 35.2 | 25.7 | **55.3** |
| UGround-7B | 82.8 | 60.3 | 82.5 | 63.6 | 80.4 | 70.4 | **73.3** |
| Aguvis-G-7B | 88.3 | 78.2 | 88.1 | 70.7 | 85.7 | 74.8 | **81.8** |
| OS-Atlas-7B | 93.0 | 72.9 | 91.8 | 62.9 | 90.9 | 74.3 | **82.5** |
| Claude Computer Use | - | - | - | - | - | - | **83.0** |
| Gemini 2.0 (Project Mariner) | - | - | - | - | - | - | **84.0** |
| Aguvis-7B | **95.6** | 77.7 | 93.8 | 67.1 | 88.3 | 75.2 | **84.4** |
| Aguvis-72B | 94.5 | **85.2** | 95.4 | 77.9 | **91.3** | **85.9** | **89.2** |
| **Our Model** | | | | | | | |
| **UI-TARS-2B** | 93.0 | 75.5 | 90.7 | 68.6 | 84.3 | 74.8 | **82.3** |
| **UI-TARS-7B** | 94.5 | **85.2** | **95.9** | 85.7 | 90.0 | 83.5 | **89.5** |
| **UI-TARS-72B** | 94.9 | 82.5 | 89.7 | **88.6** | 88.7 | 85.0 | **88.4** |
- **ScreenSpot v2**
| Method | Mobile-Text | Mobile-Icon/Widget | Desktop-Text | Desktop-Icon/Widget | Web-Text | Web-Icon/Widget | Avg |
|--------|-------------|-------------|-------------|-------------|-------------|---------|---------|
| **Agent Framework** | | | | | | | |
| GPT-4o (SeeClick) | 85.2 | 58.8 | 79.9 | 37.1 | 72.7 | 30.1 | **63.6** |
| GPT-4o (OS-Atlas-4B) | 95.5 | 75.8 | 79.4 | 49.3 | 90.2 | 66.5 | **79.1** |
| GPT-4o (OS-Atlas-7B) | 96.2 | 83.4 | 89.7 | 69.3 | **94.0** | 79.8 | **87.1** |
| **Agent Model** | | | | | | | |
| SeeClick | 78.4 | 50.7 | 70.1 | 29.3 | 55.2 | 32.5 | **55.1** |
| OS-Atlas-4B | 87.2 | 59.7 | 72.7 | 46.4 | 85.9 | 63.1 | **71.9** |
| OS-Atlas-7B | 95.2 | 75.8 | 90.7 | 63.6 | 90.6 | 77.3 | **84.1** |
| **Our Model** | | | | | | | |
| **UI-TARS-2B** | 95.2 | 79.1 | 90.7 | 68.6 | 87.2 | 78.3 | **84.7** |
| **UI-TARS-7B** | **96.9** | **89.1** | **95.4** | 85.0 | 93.6 | 85.2 | **91.6** |
| **UI-TARS-72B** | 94.8 | 86.3 | 91.2 | **87.9** | 91.5 | **87.7** | **90.3** |
**Offline Agent Capability Evaluation**
- **Multimodal Mind2Web**
| Method | Cross-Task Ele.Acc | Cross-Task Op.F1 | Cross-Task Step SR | Cross-Website Ele.Acc | Cross-Website Op.F1 | Cross-Website Step SR | Cross-Domain Ele.Acc | Cross-Domain Op.F1 | Cross-Domain Step SR |
|--------|----------------------|-------------------|--------------------|----------------------|--------------------|-------------------|--------------------|-------------------|-------------------|
| **Agent Framework** | | | | | | | | | |
| GPT-4o (SeeClick) | 32.1 | - | - | 33.1 | - | - | 33.5 | - | - |
| GPT-4o (UGround) | 47.7 | - | - | 46.0 | - | - | 46.6 | - | - |
| GPT-4o (Aria-UI) | 57.6 | - | - | 57.7 | - | - | 61.4 | - | - |
| GPT-4V (OmniParser) | 42.4 | 87.6 | 39.4 | 41.0 | 84.8 | 36.5 | 45.5 | 85.7 | 42.0 |
| **Agent Model** | | | | | | | | | |
| GPT-4o | 5.7 | 77.2 | 4.3 | 5.7 | 79.0 | 3.9 | 5.5 | 86.4 | 4.5 |
| GPT-4 (SOM) | 29.6 | - | 20.3 | 20.1 | - | 13.9 | 27.0 | - | 23.7 |
| GPT-3.5 (Text-only) | 19.4 | 59.2 | 16.8 | 14.9 | 56.5 | 14.1 | 25.2 | 57.9 | 24.1 |
| GPT-4 (Text-only) | 40.8 | 63.1 | 32.3 | 30.2 | 61.0 | 27.0 | 35.4 | 61.9 | 29.7 |
| Claude | 62.7 | 84.7 | 53.5 | 59.5 | 79.6 | 47.7 | 64.5 | 85.4 | 56.4 |
| Aguvis-7B | 64.2 | 89.8 | 60.4 | 60.7 | 88.1 | 54.6 | 60.4 | 89.2 | 56.6 |
| CogAgent | - | - | 62.3 | - | - | 54.0 | - | - | 59.4 |
| Aguvis-72B | 69.5 | 90.8 | 64.0 | 62.6 | 88.6 | 56.5 | 63.5 | 88.5 | 58.2 |
| **Our Model** | | | | | | | | | |
| **UI-TARS-2B** | 62.3 | 90.0 | 56.3 | 58.5 | 87.2 | 50.8 | 58.8 | 89.6 | 52.3 |
| **UI-TARS-7B** | 73.1 | 92.2 | 67.1 | 68.2 | 90.9 | 61.7 | 66.6 | 90.9 | 60.5 |
| **UI-TARS-72B** | **74.7** | **92.5** | **68.6** | **72.4** | **91.2** | **63.5** | **68.9** | **91.8** | **62.1** |
- **Android Control and GUI Odyssey**
| Agent Models | AndroidControl-Low Type | AndroidControl-Low Grounding | AndroidControl-Low SR | AndroidControl-High Type | AndroidControl-High Grounding | AndroidControl-High SR | GUIOdyssey Type | GUIOdyssey Grounding | GUIOdyssey SR |
|---------------------|----------------------|----------------------|----------------|----------------------|----------------------|----------------|----------------|----------------|----------------|
| Claude | 74.3 | 0.0 | 19.4 | 63.7 | 0.0 | 12.5 | 60.9 | 0.0 | 3.1 |
| GPT-4o | 74.3 | 0.0 | 19.4 | 66.3 | 0.0 | 20.8 | 34.3 | 0.0 | 3.3 |
| SeeClick | 93.0 | 73.4 | 75.0 | 82.9 | 62.9 | 59.1 | 71.0 | 52.4 | 53.9 |
| InternVL-2-4B | 90.9 | 84.1 | 80.1 | 84.1 | 72.7 | 66.7 | 82.1 | 55.5 | 51.5 |
| Qwen2-VL-7B | 91.9 | 86.5 | 82.6 | 83.8 | 77.7 | 69.7 | 83.5 | 65.9 | 60.2 |
| Aria-UI | -- | 87.7 | 67.3 | -- | 43.2 | 10.2 | -- | 86.8 | 36.5 |
| OS-Atlas-4B | 91.9 | 83.8 | 80.6 | 84.7 | 73.8 | 67.5 | 83.5 | 61.4 | 56.4 |
| OS-Atlas-7B | 93.6 | 88.0 | 85.2 | 85.2 | 78.5 | 71.2 | 84.5 | 67.8 | 62.0 |
| Aguvis-7B | -- | -- | 80.5 | -- | -- | 61.5 | -- | -- | -- |
| Aguvis-72B | -- | -- | 84.4 | -- | -- | 66.4 | -- | -- | -- |
| **UI-TARS-2B** | **98.1** | 87.3 | 89.3 | 81.2 | 78.4 | 68.9 | 93.9 | 86.8 | 83.4 |
| **UI-TARS-7B** | 98.0 | 89.3 | 90.8 | 83.7 | 80.5 | 72.5 | 94.6 | 90.1 | 87.0 |
| **UI-TARS-72B** | **98.1** | **89.9** | **91.3** | **85.2** | **81.5** | **74.7** | **95.4** | **91.4** | **88.6** |
**Online Agent Capability Evaluation**
| Method | OSWorld (Online) | AndroidWorld (Online) |
|--------|-------------------|------------------|
| **Agent Framework** | | |
| GPT-4o (UGround) | - | 32.8 |
| GPT-4o (Aria-UI) | 15.2 | 44.8 |
| GPT-4o (Aguvis-7B) | 14.8 | 37.1 |
| GPT-4o (Aguvis-72B) | 17.0 | - |
| GPT-4o (OS-Atlas-7B) | 14.6 | - |
| **Agent Model** | | |
| GPT-4o | 5.0 | 34.5 (SoM) |
| Gemini-Pro-1.5 | 5.4 | 22.8 (SoM) |
| Aguvis-72B | 10.3 | 26.1 |
| Claude Computer-Use | 14.9 (15 steps) | 27.9 |
| Claude Computer-Use | 22.0 (50 steps) | - |
| **Our Model** | | |
| **UI-TARS-7B-SFT** | 17.7 (15 steps) | 33.0 |
| **UI-TARS-7B-DPO** | 18.7 (15 steps) | - |
| **UI-TARS-72B-SFT** | 18.8 (15 steps) | **46.6** |
| **UI-TARS-72B-DPO** | **22.7** (15 steps) | - |
| **UI-TARS-72B-DPO** | **24.6** (50 steps) | - |
## Citation
If you find our paper and model useful in your research, please consider citing it.
```BibTeX
@article{qin2025ui,
title={UI-TARS: Pioneering Automated GUI Interaction with Native Agents},
author={Qin, Yujia and Ye, Yining and Fang, Junjie and Wang, Haoming and Liang, Shihao and Tian, Shizuo and Zhang, Junda and Li, Jiahao and Li, Yunxin and Huang, Shijue and others},
journal={arXiv preprint arXiv:2501.12326},
year={2025}
}
``` |
mradermacher/Nemo-DPO-V20-GGUF | mradermacher | 2025-04-03T20:22:51Z | 488 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cloudyu/Nemo-DPO-V20",
"base_model:quantized:cloudyu/Nemo-DPO-V20",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T03:02:52Z | ---
base_model: cloudyu/Nemo-DPO-V20
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cloudyu/Nemo-DPO-V20
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold3 | genki10 | 2025-04-03T20:21:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T07:13:21Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw030_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0606
- Qwk: 0.3523
- Mse: 1.0607
- Rmse: 1.0299
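As a sanity check, the reported RMSE is simply the square root of the reported MSE:

```python
import math

mse = 1.0607  # reported eval MSE
rmse = math.sqrt(mse)
print(round(rmse, 4))  # 1.0299, matching the reported RMSE
```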
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 9.8583 | 0.0018 | 9.8566 | 3.1395 |
| No log | 2.0 | 6 | 6.9476 | 0.0002 | 6.9461 | 2.6355 |
| No log | 3.0 | 9 | 4.8466 | 0.0250 | 4.8453 | 2.2012 |
| No log | 4.0 | 12 | 3.6078 | 0.0038 | 3.6069 | 1.8992 |
| No log | 5.0 | 15 | 2.1557 | 0.1503 | 2.1550 | 1.4680 |
| No log | 6.0 | 18 | 2.0821 | 0.0440 | 2.0811 | 1.4426 |
| No log | 7.0 | 21 | 1.4057 | 0.0302 | 1.4050 | 1.1853 |
| No log | 8.0 | 24 | 1.0612 | 0.0365 | 1.0607 | 1.0299 |
| No log | 9.0 | 27 | 0.8100 | 0.3550 | 0.8096 | 0.8998 |
| No log | 10.0 | 30 | 1.0159 | 0.0953 | 1.0155 | 1.0077 |
| No log | 11.0 | 33 | 0.9867 | 0.1343 | 0.9864 | 0.9932 |
| No log | 12.0 | 36 | 0.7023 | 0.4473 | 0.7024 | 0.8381 |
| No log | 13.0 | 39 | 0.6716 | 0.4789 | 0.6720 | 0.8197 |
| No log | 14.0 | 42 | 0.6881 | 0.4228 | 0.6886 | 0.8298 |
| No log | 15.0 | 45 | 0.9623 | 0.3555 | 0.9627 | 0.9812 |
| No log | 16.0 | 48 | 0.6409 | 0.4799 | 0.6415 | 0.8009 |
| No log | 17.0 | 51 | 0.6242 | 0.4968 | 0.6247 | 0.7904 |
| No log | 18.0 | 54 | 0.7232 | 0.4728 | 0.7237 | 0.8507 |
| No log | 19.0 | 57 | 0.8762 | 0.4176 | 0.8766 | 0.9363 |
| No log | 20.0 | 60 | 0.7242 | 0.4773 | 0.7249 | 0.8514 |
| No log | 21.0 | 63 | 0.8218 | 0.4462 | 0.8223 | 0.9068 |
| No log | 22.0 | 66 | 0.9877 | 0.3748 | 0.9879 | 0.9939 |
| No log | 23.0 | 69 | 0.7740 | 0.4838 | 0.7748 | 0.8802 |
| No log | 24.0 | 72 | 1.3164 | 0.2495 | 1.3160 | 1.1472 |
| No log | 25.0 | 75 | 1.3457 | 0.2485 | 1.3452 | 1.1598 |
| No log | 26.0 | 78 | 0.7355 | 0.4914 | 0.7362 | 0.8580 |
| No log | 27.0 | 81 | 0.6711 | 0.4714 | 0.6715 | 0.8194 |
| No log | 28.0 | 84 | 1.1469 | 0.3297 | 1.1467 | 1.0708 |
| No log | 29.0 | 87 | 0.6755 | 0.4932 | 0.6761 | 0.8222 |
| No log | 30.0 | 90 | 0.6891 | 0.4816 | 0.6898 | 0.8305 |
| No log | 31.0 | 93 | 1.3251 | 0.2592 | 1.3250 | 1.1511 |
| No log | 32.0 | 96 | 1.0606 | 0.3523 | 1.0607 | 1.0299 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
MALIKVARUN/varunm | MALIKVARUN | 2025-04-03T20:20:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T20:20:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.29 +/- 14.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file listing):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is an assumption -- adjust to the actual .zip in this repo
checkpoint = load_from_hub(repo_id="MALIKVARUN/varunm", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
priyanshu745/distilbert | priyanshu745 | 2025-04-03T20:20:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-03T20:19:14Z | ---
license: apache-2.0
pipeline_tag: text-classification
library_name: transformers
--- |
mradermacher/Gemma-3-4B-StockMix-GGUF | mradermacher | 2025-04-03T20:20:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"huihui-ai/gemma-3-4b-it-abliterated",
"en",
"base_model:bunnycore/Gemma-3-4B-StockMix",
"base_model:quantized:bunnycore/Gemma-3-4B-StockMix",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T20:06:59Z | ---
base_model: bunnycore/Gemma-3-4B-StockMix
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- huihui-ai/gemma-3-4b-it-abliterated
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Gemma-3-4B-StockMix
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-4B-StockMix-GGUF/resolve/main/Gemma-3-4B-StockMix.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
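The "bpw" (bits per weight) figures in the Notes column can be roughly estimated from file size and parameter count; a sketch (the 4B parameter count is an approximation):

```python
def bits_per_weight(file_size_gb, n_params_billion):
    """Approximate bits per weight: bits in the file divided by parameter count."""
    return file_size_gb * 8 / n_params_billion

# Q8_0 of an ~4B-parameter model: 4.2 GB -> roughly 8.4 bpw
print(round(bits_per_weight(4.2, 4.0), 1))  # 8.4
```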
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
azurethunder10/ok | azurethunder10 | 2025-04-03T20:19:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-03T20:19:48Z | ---
license: apache-2.0
---
|
ddh0/tensor-type-testing | ddh0 | 2025-04-03T20:18:52Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2025-04-03T20:12:21Z | ---
license: unknown
---
# Tensor Type Testing
> [!TIP]
> Skip to the bottom of this document for a TL;DR
For more info, see [llama.cpp #12511: Handle user-defined quantization levels for additional tensors](https://github.com/ggml-org/llama.cpp/pull/12511) by @EAddario
Testing done by @ddh0 using [this branch](https://github.com/EAddario/llama.cpp/tree/quantize) as of commit [5a304b8](https://github.com/EAddario/llama.cpp/commit/5a304b8e26b8c53f43e8d12515e52f9bb7d199f0). Using libllama built for Linux CUDA.
## Quantization naming scheme
```
Model-Name-E{TYPE_EMBD}-F{TYPE_FFN}-A{TYPE_ATTN}-O{TYPE_OUTPUT}.gguf
```
for example `Llama-3.1-8B-Instruct-EQ4_K-FQ4_K-AQ8_0-OQ8_0.gguf`:
- Model is Llama 3.1 8B Instruct
- TYPE_EMBD (token embeddings) are in Q4_K
- TYPE_FFN (MLP / feed-forward tensors) are in Q4_K
- TYPE_ATTN (K,Q,V attention and attention output tensors) are in Q8_0
- TYPE_OUTPUT (output tensor) is in Q8_0
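For scripting, the naming scheme above can be parsed mechanically; a small sketch (the function name and regex are illustrative, not part of any existing tool):

```python
import re

def parse_quant_name(filename):
    """Split a tensor-type quant filename into model name and per-tensor types."""
    m = re.match(
        r"(?P<model>.+)-E(?P<embd>[^-]+)-F(?P<ffn>[^-]+)"
        r"-A(?P<attn>[^-]+)-O(?P<output>[^-]+)\.gguf$",
        filename,
    )
    if m is None:
        raise ValueError(f"not a tensor-type quant name: {filename}")
    return m.groupdict()

info = parse_quant_name("Llama-3.1-8B-Instruct-EQ4_K-FQ4_K-AQ8_0-OQ8_0.gguf")
print(info["embd"], info["attn"])  # Q4_K Q8_0
```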
---
## Command template
```bash
TYPE_EMBD=GGML_TYPE
TYPE_FFN=GGML_TYPE
TYPE_ATTN=GGML_TYPE
TYPE_OUTPUT=GGML_TYPE
SRC_GGUF=/my/model/orig.gguf
DST_GGUF=/my/model/quant.gguf
N_THREADS=4
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
## Commands used for Llama 3.2
---
### Crush token embeddings to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q2_K
TYPE_FFN=Q8_0
TYPE_ATTN=Q8_0
TYPE_OUTPUT=Q8_0
SRC_GGUF=/opt/workspace/gguf/Llama-3.2-3B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Llama-3.2-3B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
### Crush FFN to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q8_0
TYPE_FFN=Q2_K
TYPE_ATTN=Q8_0
TYPE_OUTPUT=Q8_0
SRC_GGUF=/opt/workspace/gguf/Llama-3.2-3B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Llama-3.2-3B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
### Crush attention to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q8_0
TYPE_FFN=Q8_0
TYPE_ATTN=Q2_K
TYPE_OUTPUT=Q8_0
SRC_GGUF=/opt/workspace/gguf/Llama-3.2-3B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Llama-3.2-3B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
### Crush output tensor to Q2_K, otherwise Q8_0 ⚠️
> **This quant was not included in the testing because Llama 3.2 3B has no output tensor! The resulting file is the same as a normal Q8_0.**
```bash
TYPE_EMBD=Q8_0
TYPE_FFN=Q8_0
TYPE_ATTN=Q8_0
TYPE_OUTPUT=Q2_K
SRC_GGUF=/opt/workspace/gguf/Llama-3.2-3B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Llama-3.2-3B-EQ8_0-FQ8_0-AQ8_0-OQ2_K.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
## Raw results for Llama 3.2 3B
```
Number of input texts: 10
Shortest input length in tokens: 55
Longest input length in tokens: 4678
Average input length in tokens: 1605.5
Total number of input tokens: 16055
--------------------------------------------------------------------------------
Evaluating baseline model Llama-3.2-3B-BF16.gguf...
Load model...
Evaluate prompts...
Unload model...
--------------------------------------------------------------------------------
Now processing: Llama-3.2-3B-Q2_K.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Llama-3.2-3B-BF16.gguf vs. Llama-3.2-3B-Q2_K.gguf:
-- Prompt 0: 1.2261667251586914
-- Prompt 1: 1.1347604990005493
-- Prompt 2: 1.388033390045166
-- Prompt 3: 1.1053369045257568
-- Prompt 4: 1.7510676383972168
-- Prompt 5: 4.586221218109131
-- Prompt 6: 1.3651360273361206
-- Prompt 7: 0.8970077037811279
-- Prompt 8: 0.3409916162490845
-- Prompt 9: 1.2506738901138306
Average MSD: 1.5045396089553833
--------------------------------------------------------------------------------
Now processing: Llama-3.2-3B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Llama-3.2-3B-BF16.gguf vs. Llama-3.2-3B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf:
-- Prompt 0: 0.3589555025100708
-- Prompt 1: 0.1420530527830124
-- Prompt 2: 0.3871675133705139
-- Prompt 3: 0.38336610794067383
-- Prompt 4: 0.4630553722381592
-- Prompt 5: 0.3928600549697876
-- Prompt 6: 0.46294596791267395
-- Prompt 7: 0.41983363032341003
-- Prompt 8: 0.0822080597281456
-- Prompt 9: 0.3548887372016907
Average MSD: 0.34473341703414917
--------------------------------------------------------------------------------
Now processing: Llama-3.2-3B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Llama-3.2-3B-BF16.gguf vs. Llama-3.2-3B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf:
-- Prompt 0: 4.409396648406982
-- Prompt 1: 2.431891679763794
-- Prompt 2: 5.892056941986084
-- Prompt 3: 4.688146591186523
-- Prompt 4: 6.351741313934326
-- Prompt 5: 8.826679229736328
-- Prompt 6: 4.506043434143066
-- Prompt 7: 4.613113880157471
-- Prompt 8: 1.0596126317977905
-- Prompt 9: 4.1558661460876465
Average MSD: 4.693454742431641
--------------------------------------------------------------------------------
Now processing: Llama-3.2-3B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Llama-3.2-3B-BF16.gguf vs. Llama-3.2-3B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf:
-- Prompt 0: 1.0618470907211304
-- Prompt 1: 1.1212399005889893
-- Prompt 2: 1.3122810125350952
-- Prompt 3: 0.9195016026496887
-- Prompt 4: 1.201547622680664
-- Prompt 5: 5.760651111602783
-- Prompt 6: 1.0914928913116455
-- Prompt 7: 0.9646959900856018
-- Prompt 8: 0.41648873686790466
-- Prompt 9: 1.4317259788513184
Average MSD: 1.5281471014022827
--------------------------------------------------------------------------------
Now processing: Llama-3.2-3B-Q8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Llama-3.2-3B-BF16.gguf vs. Llama-3.2-3B-Q8_0.gguf:
-- Prompt 0: 0.0023212190717458725
-- Prompt 1: 0.0014450754970312119
-- Prompt 2: 0.003914575092494488
-- Prompt 3: 0.002514646854251623
-- Prompt 4: 0.003313937224447727
-- Prompt 5: 0.004224818665534258
-- Prompt 6: 0.0026909655425697565
-- Prompt 7: 0.0033839084208011627
-- Prompt 8: 0.0015104531776160002
-- Prompt 9: 0.002354747150093317
Average MSD: 0.0027674345765262842
--------------------------------------------------------------------------------
Average Mean-Squared Deviation compared to Llama-3.2-3B-BF16.gguf:
--------------------------------------------------------------------------------
Llama-3.2-3B-Q2_K.gguf -- 1.5045396089553833
Llama-3.2-3B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf -- 0.34473341703414917
Llama-3.2-3B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf -- 4.693454742431641
Llama-3.2-3B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf -- 1.5281471014022827
Llama-3.2-3B-Q8_0.gguf -- 0.0027674345765262842
--------------------------------------------------------------------------------
```
---
## Commands used for Qwen2.5-14B
---
### Crush token embeddings to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q2_K
TYPE_FFN=Q8_0
TYPE_ATTN=Q8_0
TYPE_OUTPUT=Q8_0
SRC_GGUF=/opt/workspace/gguf/Qwen2.5-14B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Qwen2.5-14B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
### Crush FFNs to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q8_0
TYPE_FFN=Q2_K
TYPE_ATTN=Q8_0
TYPE_OUTPUT=Q8_0
SRC_GGUF=/opt/workspace/gguf/Qwen2.5-14B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Qwen2.5-14B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
### Crush attention to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q8_0
TYPE_FFN=Q8_0
TYPE_ATTN=Q2_K
TYPE_OUTPUT=Q8_0
SRC_GGUF=/opt/workspace/gguf/Qwen2.5-14B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Qwen2.5-14B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
### Crush output tensor to Q2_K, otherwise Q8_0
```bash
TYPE_EMBD=Q8_0
TYPE_FFN=Q8_0
TYPE_ATTN=Q8_0
TYPE_OUTPUT=Q2_K
SRC_GGUF=/opt/workspace/gguf/Qwen2.5-14B-BF16.gguf
DST_GGUF=/opt/workspace/gguf/Qwen2.5-14B-EQ8_0-FQ8_0-AQ8_0-OQ2_K.gguf
N_THREADS=16
./llama.cpp/build/bin/llama-quantize --token-embedding-type $TYPE_EMBD --tensor-type ffn_down=$TYPE_FFN --tensor-type ffn_gate=$TYPE_FFN --tensor-type ffn_up=$TYPE_FFN --tensor-type attn_k=$TYPE_ATTN --tensor-type attn_q=$TYPE_ATTN --tensor-type attn_v=$TYPE_ATTN --tensor-type attn_out=$TYPE_ATTN --output-tensor-type $TYPE_OUTPUT $SRC_GGUF $DST_GGUF $TYPE_FFN $N_THREADS
```
---
## Raw results for Qwen2.5-14B
```
Number of input texts: 10
Shortest input length in tokens: 60
Longest input length in tokens: 4801
Average input length in tokens: 1589.3
Total number of input tokens: 15893
--------------------------------------------------------------------------------
Evaluating baseline model Qwen2.5-14B-BF16.gguf...
Load model...
Evaluate prompts...
Unload model...
--------------------------------------------------------------------------------
Now processing: Qwen2.5-14B-Q2_K.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Qwen2.5-14B-BF16.gguf vs. Qwen2.5-14B-Q2_K.gguf:
-- Prompt 0: 1.568434476852417
-- Prompt 1: 1.8605916500091553
-- Prompt 2: 1.2912431955337524
-- Prompt 3: 1.3367090225219727
-- Prompt 4: 1.1364308595657349
-- Prompt 5: 2.3384993076324463
-- Prompt 6: 1.2926896810531616
-- Prompt 7: 1.4084643125534058
-- Prompt 8: 0.32443684339523315
-- Prompt 9: 1.3756331205368042
Average MSD: 1.3933132886886597
--------------------------------------------------------------------------------
Now processing: Qwen2.5-14B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Qwen2.5-14B-BF16.gguf vs. Qwen2.5-14B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf:
-- Prompt 0: 0.012962134554982185
-- Prompt 1: 0.019185630604624748
-- Prompt 2: 0.05430002510547638
-- Prompt 3: 0.008174948394298553
-- Prompt 4: 0.011592703871428967
-- Prompt 5: 0.012105505913496017
-- Prompt 6: 0.007557644974440336
-- Prompt 7: 0.01957087405025959
-- Prompt 8: 0.013395288027822971
-- Prompt 9: 0.007488884497433901
Average MSD: 0.01663336530327797
--------------------------------------------------------------------------------
Now processing: Qwen2.5-14B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Qwen2.5-14B-BF16.gguf vs. Qwen2.5-14B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf:
-- Prompt 0: 2.483222246170044
-- Prompt 1: 2.20788836479187
-- Prompt 2: 2.2648935317993164
-- Prompt 3: 2.175588607788086
-- Prompt 4: 1.624481439590454
-- Prompt 5: 4.104475498199463
-- Prompt 6: 2.0161893367767334
-- Prompt 7: 2.0660784244537354
-- Prompt 8: 0.46407243609428406
-- Prompt 9: 2.1939690113067627
Average MSD: 2.160086154937744
--------------------------------------------------------------------------------
Now processing: Qwen2.5-14B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Qwen2.5-14B-BF16.gguf vs. Qwen2.5-14B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf:
-- Prompt 0: 0.7283403277397156
-- Prompt 1: 1.0912593603134155
-- Prompt 2: 0.9022651314735413
-- Prompt 3: 0.4880850911140442
-- Prompt 4: 0.29713207483291626
-- Prompt 5: 0.6994995474815369
-- Prompt 6: 0.45846545696258545
-- Prompt 7: 0.5286242365837097
-- Prompt 8: 0.2947601079940796
-- Prompt 9: 0.5722559690475464
Average MSD: 0.6060687303543091
--------------------------------------------------------------------------------
Now processing: Qwen2.5-14B-EQ8_0-FQ8_0-AQ8_0-OQ2_K.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Qwen2.5-14B-BF16.gguf vs. Qwen2.5-14B-EQ8_0-FQ8_0-AQ8_0-OQ2_K.gguf:
-- Prompt 0: 1.2783535718917847
-- Prompt 1: 0.4481557607650757
-- Prompt 2: 1.1880418062210083
-- Prompt 3: 1.0997036695480347
-- Prompt 4: 0.8093082308769226
-- Prompt 5: 0.6486296057701111
-- Prompt 6: 1.1238276958465576
-- Prompt 7: 1.1459368467330933
-- Prompt 8: 0.23579858243465424
-- Prompt 9: 1.238993525505066
Average MSD: 0.9216748476028442
--------------------------------------------------------------------------------
Now processing: Qwen2.5-14B-Q8_0.gguf
Load model...
Evaluate prompts...
Unload model...
Compute MSD...
Mean-Squared Deviation - Qwen2.5-14B-BF16.gguf vs. Qwen2.5-14B-Q8_0.gguf:
-- Prompt 0: 0.0059487177059054375
-- Prompt 1: 0.004823403432965279
-- Prompt 2: 0.011750683188438416
-- Prompt 3: 0.004459250718355179
-- Prompt 4: 0.004037810489535332
-- Prompt 5: 0.0039064036682248116
-- Prompt 6: 0.004684466868638992
-- Prompt 7: 0.004520604852586985
-- Prompt 8: 0.004727284424006939
-- Prompt 9: 0.004541514907032251
Average MSD: 0.0053400141187012196
--------------------------------------------------------------------------------
Average Mean-Squared Deviation compared to Qwen2.5-14B-BF16.gguf:
--------------------------------------------------------------------------------
Qwen2.5-14B-Q2_K.gguf -- 1.3933132886886597
Qwen2.5-14B-EQ2_K-FQ8_0-AQ8_0-OQ8_0.gguf -- 0.01663336530327797
Qwen2.5-14B-EQ8_0-FQ2_K-AQ8_0-OQ8_0.gguf -- 2.160086154937744
Qwen2.5-14B-EQ8_0-FQ8_0-AQ2_K-OQ8_0.gguf -- 0.6060687303543091
Qwen2.5-14B-EQ8_0-FQ8_0-AQ8_0-OQ2_K.gguf -- 0.9216748476028442
Qwen2.5-14B-Q8_0.gguf -- 0.0053400141187012196
--------------------------------------------------------------------------------
```
---
## TL;DR
Mean-Squared Deviation as compared to BF16, average over 10 inputs (lower is better):
| | Q2_K | Crush TYPE_EMBD | Crush TYPE_FFN | Crush TYPE_ATTN | Crush TYPE_OUTPUT | Q8_0 |
| ------------ | -------- | --------------- | -------------- | --------------- | ----------------- | ---------- |
| Llama 3.2 3B | 1.504 | 0.344 | 4.693 | 1.528 | N/A | 0.002 |
| Qwen2.5-14B | 1.393 | 0.016 | 2.160 | 0.606 | 0.921 | 0.005 |
| **Average** | **1.44** | **0.18** | **3.42** | **1.06** | **0.921** | **0.0035** |
In short, aggressively quantizing the FFN tensors causes the greatest deviation from BF16, while aggressively quantizing the token embeddings causes the least. Note that deviations greater than roughly 0.1 start to have a noticeable effect on the quality of the model's output. Realistically, it is probably wise to stick to some combination of Q3_K, Q4_K, Q5_K, Q6_K, and Q8_0, depending on your situation.
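The metric above can be reproduced conceptually as follows — a hedged sketch only (array shapes, variable names, and the synthetic data are illustrative; the real evaluation compares per-token logits of each quantized model against the BF16 baseline and averages one MSD per prompt):

```python
# Sketch of the Mean-Squared Deviation (MSD) metric: mean of squared
# element-wise differences between baseline and quantized logits,
# computed per prompt and then averaged across prompts.
import numpy as np

def msd(baseline_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    """MSD over all token positions and vocabulary entries."""
    diff = baseline_logits.astype(np.float64) - quant_logits.astype(np.float64)
    return float(np.mean(diff ** 2))

# Synthetic stand-ins for per-prompt logit tensors of shape (tokens, vocab).
rng = np.random.default_rng(0)
baseline = [rng.normal(size=(16, 32)) for _ in range(3)]
quant = [x + rng.normal(scale=0.1, size=x.shape) for x in baseline]

per_prompt = [msd(b, q) for b, q in zip(baseline, quant)]
average_msd = sum(per_prompt) / len(per_prompt)
print(average_msd)  # close to the injected noise variance (0.1**2 = 0.01)
```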
|
nichart/SPARE-AD | nichart | 2025-04-03T20:16:54Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-28T19:06:00Z | ---
license: other
license_name: cbica-license
license_link: LICENSE
---
Training a SPARE model (SVC) with 4201 participants
FOLD 1...
FOLD 2...
FOLD 3...
FOLD 4...
FOLD 5...
>> AUC = 0.9399 ± 0.0037
>> Accuracy = 0.8929 ± 0.0058
>> Sensitivity = 0.8781 ± 0.0167
>> Specificity = 0.9024 ± 0.0141
>> Precision = 0.8781 ± 0.0167
>> Recall = 0.8511 ± 0.0184
>> F1 = 0.8641 ± 0.0056 |
Katrun/Frankenthaler_style_sd2_LoRA | Katrun | 2025-04-03T20:16:13Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-04-03T20:16:07Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: Helen Frankenthaler
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Katrun/Frankenthaler_style_sd2_LoRA
<Gallery />
## Model description
These are Katrun/Frankenthaler_style_sd2_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use Helen Frankenthaler to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Katrun/Frankenthaler_style_sd2_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
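As a starting point, here is a minimal sketch of loading these LoRA weights with `diffusers`. The repository id, base model, and trigger word are taken from this card; everything else assumes standard `DiffusionPipeline` LoRA usage and a CUDA GPU:

```python
# Sketch only: assumes standard diffusers LoRA loading and a CUDA device.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.load_lora_weights("Katrun/Frankenthaler_style_sd2_LoRA")
pipe.to("cuda")

# "Helen Frankenthaler" is the trigger phrase for this LoRA.
image = pipe(prompt="an abstract color-field painting, Helen Frankenthaler").images[0]
image.save("frankenthaler_style.png")
```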
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T20:16:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"dataset:shisa-ai/translation_expanded_master_set_filtered",
"dataset:shisa-ai/rewild-set",
"dataset:shisa-ai/magpie-ultra-set",
"dataset:shisa-ai/magpie-advanced-questions-set",
"dataset:shisa-ai/japan-magpie-set",
"dataset:shisa-ai/ko_dataset_conversations",
"dataset:shisa-ai/tmmluplus_sim",
"base_model:shisa-ai/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T19:53:07Z | ---
base_model: shisa-ai/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
- shisa-ai/ko_dataset_conversations
- shisa-ai/tmmluplus_sim
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-132-geniac.gbs128.1e5-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
elikoy/80s-he-man-toy-lora | elikoy | 2025-04-03T20:15:50Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-04-03T20:15:44Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
80s he-man toy, a donald trump doll toy wears a blue suit and a red tie. It
is in a packaging with a burger and fries accessories. The text says
"COVFEFE" <lora:80S_He-man_toy:0.9>
output:
url: images/00043-3915033353.png
- text: "80s he-man toy, A (Marie Curie:1.3) action figure, detailed and lifelike, enclosed in a plastic box. The packaging background depicts lots of radioactive material, The Power of the Atom, \"Mother Of Atomic Science\" title. Gonzo. <lora:gonza_v2:0.8>"
output:
url: images/11782952.jpeg
- text: 80s he-man toy, action figure
output:
url: images/12547140.jpeg
- text: "80s he-man toy, A meticulously crafted action figure of 'Jared Padalecki' as 'Sam Winchester' from 'Supernatural', presented in a durable plastic box. The figure showcases Sam's tall, lean build and features his typical attire of a layered shirt and jacket, along with his long hair, characteristic of the later seasons. Accessories include his demon-hunting knife and a book of lore. The packaging is decorated with eerie supernatural imagery and includes the names 'Jared Padalecki' and 'Sam Winchester', along with the 'Supernatural' logo. <lora:gonza_v2:0.7>"
output:
url: images/5053220.jpeg
- text: "80s he-man toy, A walter white action figure, detailed and lifelike, enclosed in a plastic box. The packaging background depicts a dark green board with the periodic table, hazmat suit, no helmat, <lora:gonza_v2:0.8>"
output:
url: images/4703970.jpeg
- text: "80s he-man toy, A 1980 Kamala Harris action figure, detailed and lifelike, enclosed in a plastic box. The figure features Kamala Harris recognizable shoulder-length brunette bobcut hair, brown women's blazer, white shirt with stiff collar, and men;s slacks. The packaging background depicts scenes from the kamala harris will to win"
output:
url: images/22516640.jpeg
- text: "80s he-man toy, a 1988 hyperrealistic ((sensuously ultra high-quality)) jointed action figure that precisely resembles A Famous Mother superstar ((((Kris Jenner)))) in blue-and-black women's blazer, white shirt, blue men;s slacks, bold eyeliner mascara, The figurine is sculpturally advanced, articulated, and captures Kris Jenner's accurate features with playful realism, The packaging features a masterpiece caricature artwork illustration of kris jennerr, adding a playful contrast to the realism of the figure, located on the right side of the package, wearing matching blue-and-black women's blazer, white shirt, blue men;s slacks. Her name, \"KRIS JENNER\", is printed prominently in bold, whimsical letters on the packaging, marketing her renowned identity."
output:
url: images/22516459.jpeg
- text: "80s he-man toy, A 1980 Kamala Harris action figure, detailed and lifelike, enclosed in a plastic box. The figure features Kamala Harris recognizable shoulder-length brunette bobcut hair, brown women's blazer, white shirt with stiff collar, and men;s slacks. The packaging background depicts scenes from the kamala harris will to win"
output:
url: images/22448194.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 80s he-man toy
---
# 80s he-man toy
<Gallery />
## Model description
80s he-man toy, a donald trump doll toy wears a blue suit and a red tie. It is in a packaging with a burger and fries accessories. The text says "COVFEFE" <lora:80S_He-man_toy:0.9>

## Trigger words
You should use `80s he-man toy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/elikoy/80s-he-man-toy-lora/tree/main) them in the Files & versions tab.
|
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold2 | genki10 | 2025-04-03T20:12:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T07:03:06Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw030_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0061
- Qwk: 0.2594
- Mse: 1.0060
- Rmse: 1.0030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.1999 | 0.0005 | 8.2001 | 2.8636 |
| No log | 2.0 | 6 | 5.3717 | 0.0366 | 5.3719 | 2.3177 |
| No log | 3.0 | 9 | 3.4922 | 0.0 | 3.4925 | 1.8688 |
| No log | 4.0 | 12 | 2.4710 | 0.0139 | 2.4714 | 1.5721 |
| No log | 5.0 | 15 | 1.6968 | 0.0422 | 1.6973 | 1.3028 |
| No log | 6.0 | 18 | 1.1874 | 0.0 | 1.1879 | 1.0899 |
| No log | 7.0 | 21 | 0.9127 | 0.0069 | 0.9131 | 0.9556 |
| No log | 8.0 | 24 | 1.0118 | 0.0174 | 1.0122 | 1.0061 |
| No log | 9.0 | 27 | 0.7545 | 0.3841 | 0.7547 | 0.8687 |
| No log | 10.0 | 30 | 1.3825 | 0.2252 | 1.3828 | 1.1759 |
| No log | 11.0 | 33 | 0.8139 | 0.4517 | 0.8140 | 0.9022 |
| No log | 12.0 | 36 | 0.7660 | 0.3697 | 0.7662 | 0.8753 |
| No log | 13.0 | 39 | 0.7768 | 0.3524 | 0.7769 | 0.8814 |
| No log | 14.0 | 42 | 1.0432 | 0.2662 | 1.0432 | 1.0214 |
| No log | 15.0 | 45 | 1.4484 | 0.2263 | 1.4481 | 1.2034 |
| No log | 16.0 | 48 | 0.5584 | 0.5392 | 0.5581 | 0.7471 |
| No log | 17.0 | 51 | 0.7575 | 0.4882 | 0.7573 | 0.8702 |
| No log | 18.0 | 54 | 2.2370 | 0.1477 | 2.2363 | 1.4954 |
| No log | 19.0 | 57 | 0.5319 | 0.5722 | 0.5317 | 0.7291 |
| No log | 20.0 | 60 | 1.1213 | 0.3593 | 1.1209 | 1.0587 |
| No log | 21.0 | 63 | 0.8766 | 0.3950 | 0.8762 | 0.9361 |
| No log | 22.0 | 66 | 1.3210 | 0.1808 | 1.3204 | 1.1491 |
| No log | 23.0 | 69 | 1.0514 | 0.2160 | 1.0508 | 1.0251 |
| No log | 24.0 | 72 | 0.8912 | 0.3101 | 0.8907 | 0.9438 |
| No log | 25.0 | 75 | 1.2625 | 0.1467 | 1.2621 | 1.1235 |
| No log | 26.0 | 78 | 1.0112 | 0.2495 | 1.0109 | 1.0054 |
| No log | 27.0 | 81 | 0.9639 | 0.3227 | 0.9637 | 0.9817 |
| No log | 28.0 | 84 | 0.8281 | 0.4141 | 0.8278 | 0.9098 |
| No log | 29.0 | 87 | 1.5125 | 0.2320 | 1.5123 | 1.2297 |
| No log | 30.0 | 90 | 0.6534 | 0.5310 | 0.6531 | 0.8081 |
| No log | 31.0 | 93 | 1.3984 | 0.2492 | 1.3983 | 1.1825 |
| No log | 32.0 | 96 | 0.6678 | 0.5155 | 0.6675 | 0.8170 |
| No log | 33.0 | 99 | 0.9190 | 0.3503 | 0.9188 | 0.9585 |
| No log | 34.0 | 102 | 1.0061 | 0.2594 | 1.0060 | 1.0030 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF | mradermacher | 2025-04-03T20:10:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"agent",
"coding",
"en",
"base_model:JackCloudman/openhands-lm-32b-v0.1-jackterated",
"base_model:quantized:JackCloudman/openhands-lm-32b-v0.1-jackterated",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-03T13:48:45Z | ---
base_model: JackCloudman/openhands-lm-32b-v0.1-jackterated
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- agent
- coding
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/JackCloudman/openhands-lm-32b-v0.1-jackterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
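To make the size column actionable, here is a small hand-rolled sketch (not part of the quantization tooling) that picks the largest quant fitting a given memory budget. The sizes are copied from a few rows of the table above; the headroom figure is an assumption, since real memory use also depends on context length and KV-cache settings.

```python
# Sketch: pick the largest i1 quant whose file fits a memory budget.
# Sizes (GB) are copied from the table above; `headroom_gb` is an
# assumed safety margin for KV cache and runtime overhead.
QUANTS = [
    ("i1-IQ1_S", 7.4),
    ("i1-IQ1_M", 8.0),
    ("i1-IQ2_XXS", 9.1),
    ("i1-IQ2_XS", 10.1),
    ("i1-IQ2_M", 11.4),
    ("i1-Q2_K", 12.4),
    ("i1-IQ3_M", 14.9),
    ("i1-Q4_K_M", 20.0),
    ("i1-Q5_K_M", 23.4),
    ("i1-Q6_K", 27.0),
]

def pick_quant(budget_gb, quants=QUANTS, headroom_gb=1.5):
    """Return the largest quant (name, size_gb) that fits, or None."""
    fitting = [(name, size) for name, size in quants
               if size + headroom_gb <= budget_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None

print(pick_quant(24.0))  # e.g. a 24 GB GPU -> ('i1-Q4_K_M', 20.0)
```

On a 24 GB card this lands on Q4_K_M, which matches the table's own "fast, recommended" note for that row.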
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Serbero2025/mediapanel2 | Serbero2025 | 2025-04-03T20:09:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-03T20:07:37Z | ---
license: apache-2.0
---
|
bowilleatyou/bf9bb93f-890d-4008-ace1-645b11a104fe | bowilleatyou | 2025-04-03T20:08:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-03T15:18:22Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VarvaraG/pokemon_pic_LoRA | VarvaraG | 2025-04-03T20:08:16Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-04-03T20:08:10Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: 'pokemon picture, '
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - VarvaraG/pokemon_pic_LoRA
<Gallery />
## Model description
These are VarvaraG/pokemon_pic_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `pokemon picture, ` (including the trailing comma) to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](VarvaraG/pokemon_pic_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (not from the training run itself): load the base SDXL
# pipeline, attach these LoRA weights, and generate with the trigger phrase.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("VarvaraG/pokemon_pic_LoRA")

# The prompt content after the trigger phrase is a made-up example.
image = pipe(prompt="pokemon picture, a small grass-type creature").images[0]
image.save("pokemon.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ahmed-masry/lilt-mlm-23438 | ahmed-masry | 2025-04-03T20:08:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lilt",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-03T20:02:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtommasi/q-FrozenLake-v1-4x4-noSlippery | rtommasi | 2025-04-03T20:07:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-03T20:07:28Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="rtommasi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
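Once loaded, the agent simply acts greedily on the Q-table. A minimal sketch of that step, where the `qtable` array below is a stand-in for the one stored in the pickle (the values for state 0 are made up for illustration):

```python
import numpy as np

# Stand-in Q-table: 16 states (4x4 grid) x 4 actions. In practice this
# would come from the loaded pickle rather than being built by hand.
qtable = np.zeros((16, 4))
qtable[0] = [0.1, 0.5, 0.2, 0.0]  # hypothetical values for state 0

def greedy_action(qtable, state):
    """Exploit: take the action with the highest Q-value for this state."""
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0))  # -> 1
```

Acting greedily (no epsilon-exploration) is the standard way to evaluate a trained Q-learning agent, which is how the `mean_reward` above would be measured.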
|
mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T20:07:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"dataset:shisa-ai/translation_expanded_master_set_filtered",
"dataset:shisa-ai/rewild-set",
"dataset:shisa-ai/magpie-ultra-set",
"dataset:shisa-ai/magpie-advanced-questions-set",
"dataset:shisa-ai/japan-magpie-set",
"base_model:shisa-ai/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T19:44:10Z | ---
base_model: shisa-ai/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-127-shisav2.gbs128.5e6-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
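As a rough sanity check, the size column maps back to bits per weight: an ~8B-parameter model at Q4_K_M (5.0 GB in the table above) works out to roughly 5 bits per weight, and the f16 row to about 16, matching its "16 bpw" note. A small sketch of that arithmetic (the parameter count is an assumption, and file sizes include some metadata overhead):

```python
def bits_per_weight(file_size_gb, n_params):
    """Approximate bits per weight implied by a GGUF file size."""
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M from the table, assuming ~8B parameters
bpw = bits_per_weight(5.0, 8.0e9)
print(round(bpw, 2))  # -> 5.0
```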
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CatkinChen/babyai-classical-ppo-experiments-2025-04-03_20-00-28 | CatkinChen | 2025-04-03T20:06:56Z | 0 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-04-03T20:00:33Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
gbelewade/test-mt5-base-eng-yor-stem | gbelewade | 2025-04-03T20:03:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-03T20:01:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Burhanerdem/BRBR | Burhanerdem | 2025-04-03T20:03:45Z | 0 | 0 | null | [
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"region:us"
] | null | 2025-04-03T20:01:05Z | ---
base_model:
- black-forest-labs/FLUX.1-dev
--- |
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold1 | genki10 | 2025-04-03T20:02:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-25T06:53:09Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_AugV8_k3_task1_organization_sp020_lw030_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5668
- Qwk: 0.1774
- Mse: 1.5646
- Rmse: 1.2509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 9.8803 | 0.0 | 9.8777 | 3.1429 |
| No log | 2.0 | 6 | 8.5074 | 0.0 | 8.5050 | 2.9163 |
| No log | 3.0 | 9 | 6.1849 | 0.0 | 6.1827 | 2.4865 |
| No log | 4.0 | 12 | 4.5241 | 0.0040 | 4.5220 | 2.1265 |
| No log | 5.0 | 15 | 3.1005 | 0.0 | 3.0987 | 1.7603 |
| No log | 6.0 | 18 | 1.9344 | 0.0315 | 1.9328 | 1.3902 |
| No log | 7.0 | 21 | 1.1911 | 0.0 | 1.1897 | 1.0907 |
| No log | 8.0 | 24 | 1.0618 | 0.0106 | 1.0604 | 1.0298 |
| No log | 9.0 | 27 | 0.9848 | 0.0117 | 0.9835 | 0.9917 |
| No log | 10.0 | 30 | 1.2768 | 0.1105 | 1.2752 | 1.1293 |
| No log | 11.0 | 33 | 1.0524 | 0.1585 | 1.0509 | 1.0252 |
| No log | 12.0 | 36 | 1.1397 | 0.1857 | 1.1381 | 1.0668 |
| No log | 13.0 | 39 | 1.0129 | 0.2508 | 1.0113 | 1.0057 |
| No log | 14.0 | 42 | 1.4825 | 0.1671 | 1.4805 | 1.2168 |
| No log | 15.0 | 45 | 0.8716 | 0.3369 | 0.8700 | 0.9327 |
| No log | 16.0 | 48 | 1.4877 | 0.1779 | 1.4854 | 1.2188 |
| No log | 17.0 | 51 | 0.7820 | 0.3723 | 0.7810 | 0.8838 |
| No log | 18.0 | 54 | 1.2322 | 0.2378 | 1.2303 | 1.1092 |
| No log | 19.0 | 57 | 0.8243 | 0.3610 | 0.8232 | 0.9073 |
| No log | 20.0 | 60 | 1.6826 | 0.1259 | 1.6804 | 1.2963 |
| No log | 21.0 | 63 | 1.2579 | 0.2195 | 1.2561 | 1.1208 |
| No log | 22.0 | 66 | 1.7878 | 0.1315 | 1.7854 | 1.3362 |
| No log | 23.0 | 69 | 1.2458 | 0.2445 | 1.2440 | 1.1154 |
| No log | 24.0 | 72 | 1.3340 | 0.2095 | 1.3321 | 1.1542 |
| No log | 25.0 | 75 | 1.2146 | 0.2585 | 1.2127 | 1.1012 |
| No log | 26.0 | 78 | 1.5301 | 0.1756 | 1.5278 | 1.2360 |
| No log | 27.0 | 81 | 1.2785 | 0.2104 | 1.2764 | 1.1298 |
| No log | 28.0 | 84 | 1.3053 | 0.2051 | 1.3033 | 1.1416 |
| No log | 29.0 | 87 | 1.5093 | 0.1698 | 1.5072 | 1.2277 |
| No log | 30.0 | 90 | 1.1779 | 0.2384 | 1.1762 | 1.0845 |
| No log | 31.0 | 93 | 1.7735 | 0.1518 | 1.7712 | 1.3309 |
| No log | 32.0 | 96 | 1.5668 | 0.1774 | 1.5646 | 1.2509 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
elikoy/actionhero | elikoy | 2025-04-03T19:58:15Z | 0 | 1 | null | [
"arxiv:1910.09700",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"region:us"
] | null | 2025-04-03T19:56:58Z | ---
base_model:
- black-forest-labs/FLUX.1-dev
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf | RichardErkhov | 2025-04-03T19:56:13Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T19:20:45Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-Kemonai-Ecommerce-ChatBot - GGUF
- Model creator: https://huggingface.co/chibexme/
- Original model: https://huggingface.co/chibexme/llama-3.2-3b-Kemonai-Ecommerce-ChatBot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF | mradermacher | 2025-04-03T19:53:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:shisa-ai/shisav1-deepseek-ai_DeepSeek-V3-0324-reannotated-filtered",
"dataset:shisa-ai/shisa-v2-roleplaying-sft",
"dataset:shisa-ai/translation_expanded_master_set_filtered",
"dataset:shisa-ai/rewild-set",
"dataset:shisa-ai/magpie-ultra-set",
"dataset:shisa-ai/magpie-advanced-questions-set",
"dataset:shisa-ai/japan-magpie-set",
"base_model:shisa-ai/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b",
"base_model:quantized:shisa-ai/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T19:26:00Z | ---
base_model: shisa-ai/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisav1-deepseek-ai_DeepSeek-V3-0324-reannotated-filtered
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shisa-ai/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
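For older multi-part quants, the parts are plain byte splits, so joining them in order with `cat` rebuilds the file. A minimal sketch with dummy data standing in for the real parts (the file names here are placeholders, not this repo's actual files):

```shell
# Dummy stand-ins for the two halves of a split quant file.
printf 'first-half'  > model.Q5_K_M.gguf-split-a
printf 'second-half' > model.Q5_K_M.gguf-split-b

# Older multi-part quants are simple byte splits: concatenating the
# parts in order reconstructs the original single-file GGUF.
cat model.Q5_K_M.gguf-split-a model.Q5_K_M.gguf-split-b > model.Q5_K_M.gguf

# Splits produced by newer llama.cpp gguf-split are NOT plain byte
# slices and should be merged with that tool instead, e.g.:
#   llama-gguf-split --merge model-00001-of-00002.gguf model.Q5_K_M.gguf
```

Once merged, the single `.gguf` file can be loaded as usual by llama.cpp-based tools.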
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-139-shisav2.ds.gbs128.1e5-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-sym | fbaldassarri | 2025-04-03T19:52:43Z | 0 | 0 | null | [
"safetensors",
"llama",
"pytorch",
"causal-lm",
"OpenLLaMA",
"autoround",
"auto-round",
"intel-autoround",
"gptq",
"auto-gptq",
"autogptq",
"woq",
"intel",
"openlm-research",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"base_model:openlm-research/open_llama_7b_v2",
"base_model:quantized:openlm-research/open_llama_7b_v2",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-04-03T19:50:53Z | ---
tags:
- pytorch
- causal-lm
- OpenLLaMA
- autoround
- auto-round
- intel-autoround
- gptq
- auto-gptq
- autogptq
- woq
- intel
- openlm-research
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
model_name: OpenLLaMA 7B v2
base_model:
- openlm-research/open_llama_7b_v2
inference: false
model_creator: openlm-research
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 128
- Symmetrical Quantization
- Method: AutoGPTQ
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6
Note: this INT8 version of open_llama_7b_v2 has been quantized to run inference on CPU.
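As a toy illustration of what "8 bits, group size 128, symmetric" means (a simplified sketch, not AutoRound's or AutoGPTQ's actual implementation): each group of 128 weights shares one scale, the zero-point is fixed at 0, and each weight is rounded to an integer in the INT8 range.

```python
import random

def quantize_symmetric(weights, bits=8, group_size=128):
    """Toy symmetric group-wise quantizer: each group of `group_size`
    values shares one scale; the zero-point is fixed at 0 (symmetric)."""
    qmax = 2 ** (bits - 1) - 1  # 127 for INT8
    q, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = max(abs(x) for x in group) / qmax
        scales.append(scale)
        q.append([max(-qmax - 1, min(qmax, round(x / scale))) for x in group])
    return q, scales

def dequantize(q, scales):
    return [v * s for row, s in zip(q, scales) for v in row]

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(256)]  # two groups of 128
q, s = quantize_symmetric(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(max_err)  # bounded by half a quantization step, i.e. max(s) / 2
```

Real quantizers such as AutoRound additionally tune the rounding on calibration data (the `nsamples`/`iters` arguments in the recipe below) rather than using plain round-to-nearest.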
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz
tar -xvzf v0.4.6.tar.gz
cd auto-round-0.4.6
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "openlm-research/open_llama_7b_v2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 128, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-sym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
ashishlmpmishra/zonic-3d-charc | ashishlmpmishra | 2025-04-03T19:51:31Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-03T19:51:14Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/zonic-3d-charc_000500_02_20250403190429_45.png
text: Zonic 3D charc walking in streets --d 45
- output:
url: sample/zonic-3d-charc_001000_02_20250403191421_45.png
text: Zonic 3D charc running from police --d 45
- output:
url: sample/zonic-3d-charc_001500_02_20250403192413_45.png
text: Zonic 3D charc saving a cat from drowning --d 45
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Zonic 3D charc
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Zonic 3D charc
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Zonic 3D charc` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
eggmoo/omega_M5olqGr | eggmoo | 2025-04-03T19:51:18Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-03T19:51:17Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|