| modelId (string, length 5 to 138) | author (string, length 2 to 42) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-15 06:29:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 426 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-15 06:29:46) | card (string, length 11 to 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
nm-testing/Qwen2.5-0.5B-W4A16_channel-e2e | nm-testing | "2025-04-15T02:52:50Z" | 12 | 0 | null | [
"safetensors",
"qwen2",
"compressed-tensors",
"region:us"
] | null | "2025-03-11T02:35:56Z" | (card unavailable: the scrape captured an HTTP 429 rate-limit error page from huggingface.co instead of the model card) |
eeeebbb2/157537f9-541f-404d-b604-da57e1997f26 | eeeebbb2 | "2024-12-07T18:21:59Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | "2024-12-07T18:18:23Z" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 157537f9-541f-404d-b604-da57e1997f26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- ed7ac71786c7da0a_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/ed7ac71786c7da0a_train_data.json
streaming: true
type:
field_input: chosen
field_instruction: prompt
field_output: rejected
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: balanced
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: eeeebbb2/157537f9-541f-404d-b604-da57e1997f26
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
1: 75GB
2: 75GB
3: 75GB
max_steps: 50
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/ed7ac71786c7da0a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: 157537f9-541f-404d-b604-da57e1997f26
wandb_project: Public_TuningSN
wandb_runid: 157537f9-541f-404d-b604-da57e1997f26
warmup_ratio: 0.04
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 157537f9-541f-404d-b604-da57e1997f26
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9089
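Since this repository contains a PEFT (LoRA) adapter rather than full model weights, it has to be loaded on top of the base checkpoint. A minimal sketch, assuming the standard `peft`/`transformers` loading path (not an official snippet from this card):
```python
# Minimal sketch, assuming the standard peft/transformers loading path
# (not an official snippet from this card).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "echarlaix/tiny-random-PhiForCausalLM"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "eeeebbb2/157537f9-541f-404d-b604-da57e1997f26")
model.eval()
```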
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9388 | 0.0005 | 1 | 6.9346 |
| 6.9184 | 0.0123 | 25 | 6.9169 |
| 6.9121 | 0.0247 | 50 | 6.9089 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/casarulez_-_merged-vit-bot-gguf | RichardErkhov | "2025-02-20T08:34:39Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-20T08:14:25Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
merged-vit-bot - GGUF
- Model creator: https://huggingface.co/casarulez/
- Original model: https://huggingface.co/casarulez/merged-vit-bot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [merged-vit-bot.Q2_K.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q2_K.gguf) | Q2_K | 0.54GB |
| [merged-vit-bot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [merged-vit-bot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [merged-vit-bot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [merged-vit-bot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [merged-vit-bot.Q3_K.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q3_K.gguf) | Q3_K | 0.64GB |
| [merged-vit-bot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [merged-vit-bot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [merged-vit-bot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [merged-vit-bot.Q4_0.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q4_0.gguf) | Q4_0 | 0.72GB |
| [merged-vit-bot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [merged-vit-bot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [merged-vit-bot.Q4_K.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q4_K.gguf) | Q4_K | 0.75GB |
| [merged-vit-bot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [merged-vit-bot.Q4_1.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q4_1.gguf) | Q4_1 | 0.77GB |
| [merged-vit-bot.Q5_0.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q5_0.gguf) | Q5_0 | 0.83GB |
| [merged-vit-bot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [merged-vit-bot.Q5_K.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q5_K.gguf) | Q5_K | 0.85GB |
| [merged-vit-bot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [merged-vit-bot.Q5_1.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q5_1.gguf) | Q5_1 | 0.89GB |
| [merged-vit-bot.Q6_K.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q6_K.gguf) | Q6_K | 0.95GB |
| [merged-vit-bot.Q8_0.gguf](https://huggingface.co/RichardErkhov/casarulez_-_merged-vit-bot-gguf/blob/main/merged-vit-bot.Q8_0.gguf) | Q8_0 | 1.23GB |
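To try one of these quants locally, download the file and load it with any llama.cpp-based runtime. A minimal sketch assuming `huggingface_hub` and `llama-cpp-python` are installed (the choice of quant is arbitrary):
```python
# Minimal sketch; assumes huggingface_hub and llama-cpp-python are installed.
# Any llama.cpp-based runtime can load these GGUF files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/casarulez_-_merged-vit-bot-gguf",
    filename="merged-vit-bot.Q4_K_M.gguf",  # 0.75GB, listed in the table above
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```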
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farooqkhan2840503/gemma-Instruct-Finetune_25_0.0002-batch1 | farooqkhan2840503 | "2024-03-13T21:12:42Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-13T21:07:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
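In the absence of an official snippet, here is a minimal sketch assuming standard 🤗 `transformers` text-generation usage (which the repository tags suggest):
```python
# Minimal sketch, assuming standard transformers text-generation usage;
# the card itself provides no official example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="farooqkhan2840503/gemma-Instruct-Finetune_25_0.0002-batch1",
)
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```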
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
e-hossam96/arabic-nano-gpt-v0 | e-hossam96 | "2024-11-01T13:27:36Z" | 165 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"ar",
"dataset:wikimedia/wikipedia",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-17T00:20:46Z" | ---
library_name: transformers
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: arabic-nano-gpt
results: []
datasets:
- wikimedia/wikipedia
language:
- ar
---
# arabic-nano-gpt
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the Arabic [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
Repository on GitHub: [e-hossam96/arabic-nano-gpt](https://github.com/e-hossam96/arabic-nano-gpt.git)
The model achieves the following results on the held-out test set:
- Loss: 3.28796
## How to Use
```python
import torch
from transformers import pipeline

model_ckpt = "e-hossam96/arabic-nano-gpt-v0"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build a text-generation pipeline from the hosted checkpoint.
lm = pipeline(task="text-generation", model=model_ckpt, device=device)
prompt = """المحرك النفاث هو محرك ينفث الموائع (الماء أو الهواء) بسرعة فائقة \
لينتج قوة دافعة اعتمادا على مبدأ قانون نيوتن الثالث للحركة. \
هذا التعريف الواسع للمحركات النفاثة يتضمن أيضا"""
# Generate a continuation of the Arabic prompt.
output = lm(prompt, max_new_tokens=128)
print(output[0]["generated_text"])
```
## Model description
- Embedding Size: 256
- Attention Heads: 4
- Attention Layers: 4
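These sizes can be checked against the hosted config; a small sketch assuming the standard GPT-2 config attribute names:
```python
# Sketch: verify the architecture sizes above from the hosted config
# (assumes the standard GPT-2 config attribute names).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("e-hossam96/arabic-nano-gpt-v0")
print(config.n_embd, config.n_head, config.n_layer)  # expected: 256 4 4
```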
## Training and evaluation data
The entire Wikipedia dataset was split into train, validation, and test sets in a 90-5-5 ratio.
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 24
## Training Loss

## Validation Loss

## Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Ibrahim-Alam/finetuning-xlm-mlm-en-2048-on-sst2 | Ibrahim-Alam | "2023-05-31T18:27:27Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm",
"text-classification",
"generated_from_trainer",
"dataset:sst2",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-31T17:24:55Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
- f1
model-index:
- name: finetuning-xlm-mlm-en-2048-on-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sst2
type: sst2
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5091743119266054
- name: F1
type: f1
value: 0.6747720364741641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-xlm-mlm-en-2048-on-sst2
This model is a fine-tuned version of [xlm-mlm-en-2048](https://huggingface.co/xlm-mlm-en-2048) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6985
- Accuracy: 0.5092
- F1: 0.6748
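For a quick sanity check of its predictions, a minimal inference sketch assuming the standard `transformers` text-classification pipeline:
```python
# Minimal sketch, assuming the standard text-classification pipeline.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Ibrahim-Alam/finetuning-xlm-mlm-en-2048-on-sst2",
)
print(clf("This movie was a delight from start to finish."))
```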
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
huzalisandra/ProiectLFT | huzalisandra | "2024-05-30T15:07:32Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-30T15:07:32Z" | ---
license: apache-2.0
---
|
dkqjrm/20230818214757 | dkqjrm | "2023-08-18T22:20:42Z" | 117 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-08-18T12:48:32Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: '20230818214757'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230818214757
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DucHaiten/DH_ClassicAnime | DucHaiten | "2023-03-02T17:04:56Z" | 58 | 48 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-02-13T15:41:07Z" | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
---
I don't know about you, but in my opinion this is the best anime model I've ever created. With a bit of romance, a little bit of the classics, and the indispensable NSFW, this is my favorite anime model. I even intended to sell it, but changed my mind in the end; it wouldn't be right if not everyone could use it.
After working with this model for a while, I have picked up a few tips for creating better images:
1. Always add the keyword **(80s anime style)** at the beginning of the prompt. A GTA style is also included; its trigger keyword is **(gtav style)**. Note that only one of these keywords can be used per prompt: GTA without anime, or anime without GTA.
2. Use this negative prompt: <pre>illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyebrows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error</pre>
3. Set the CFG Scale between 12.5 and 15.
Note that my sample images were generated without a VAE.
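For reference, a minimal `diffusers` sketch following the tips above (the prompt, shortened negative prompt, and CFG value are illustrative; a CUDA GPU is assumed):
```python
# Minimal sketch using diffusers; prompt and settings follow the tips above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DH_ClassicAnime", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "(80s anime style), a girl walking on a beach at sunset",
    negative_prompt="illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), lowres, bad anatomy, bad hands",  # shortened; full prompt above
    guidance_scale=13.5,  # CFG scale in the recommended 12.5 to 15 range
).images[0]
image.save("sample.png")
```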













|
OpenGVLab/Mono-InternVL-2B-S1-2 | OpenGVLab | "2025-03-12T16:25:15Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"vision",
"ocr",
"custom_code",
"moe",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2410.08202",
"base_model:internlm/internlm2-chat-1_8b",
"base_model:merge:internlm/internlm2-chat-1_8b",
"license:mit",
"region:us"
] | image-text-to-text | "2025-02-13T14:04:09Z" | ---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- internlm/internlm2-chat-1_8b
base_model_relation: merge
language:
- multilingual
tags:
- internvl
- vision
- ocr
- custom_code
- moe
---
# Mono-InternVL-2B-S1-2
This repository contains the Mono-InternVL-2B model after **S1.1 concept learning** and **S1.2 semantic learning**.
Please refer to our [**paper**](https://huggingface.co/papers/2410.08202), [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for introduction and usage.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{luo2024mono,
title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
journal={arXiv preprint arXiv:2410.08202},
year={2024}
}
```
|
Kuongan/CS221-roberta-large-finetuned-semeval-NT | Kuongan | "2024-12-28T11:31:22Z" | 34 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-28T10:44:51Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-roberta-large-finetuned-semeval-NT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-roberta-large-finetuned-semeval-NT
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6250
- F1: 0.7461
- Roc Auc: 0.8116
- Accuracy: 0.4657
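The F1/ROC-AUC metrics suggest a multi-label setup; a hedged inference sketch under that assumption:
```python
# Sketch under the assumption that this is a multi-label classifier
# (suggested by the F1 / ROC-AUC / subset-accuracy metrics above).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Kuongan/CS221-roberta-large-finetuned-semeval-NT"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("I can't believe this happened!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)  # per-label probabilities
print(probs > 0.5)             # thresholded label predictions
```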
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4247 | 1.0 | 277 | 0.3771 | 0.6157 | 0.7185 | 0.4025 |
| 0.3123 | 2.0 | 554 | 0.3756 | 0.6707 | 0.7597 | 0.4495 |
| 0.2477 | 3.0 | 831 | 0.3577 | 0.7215 | 0.7856 | 0.4982 |
| 0.153 | 4.0 | 1108 | 0.4303 | 0.7345 | 0.8017 | 0.4711 |
| 0.0938 | 5.0 | 1385 | 0.4975 | 0.7334 | 0.7961 | 0.4657 |
| 0.0761 | 6.0 | 1662 | 0.5342 | 0.7427 | 0.8027 | 0.4819 |
| 0.0475 | 7.0 | 1939 | 0.5857 | 0.7441 | 0.7987 | 0.4458 |
| 0.0165 | 8.0 | 2216 | 0.6250 | 0.7461 | 0.8116 | 0.4657 |
| 0.0077 | 9.0 | 2493 | 0.6812 | 0.7355 | 0.7937 | 0.4567 |
| 0.0065 | 10.0 | 2770 | 0.6681 | 0.7368 | 0.7974 | 0.4874 |
| 0.0093 | 11.0 | 3047 | 0.7421 | 0.7393 | 0.7981 | 0.4603 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
sail-rvc/Miguelillo_RL__RVC_V2_-_240_Epochs_ | sail-rvc | "2023-07-14T07:28:00Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:27:45Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Miguelillo_RL__RVC_V2_-_240_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:28:00
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
mradermacher/KhanomTanLLM-3B-i1-GGUF | mradermacher | "2025-01-19T06:36:39Z" | 428 | 0 | transformers | [
"transformers",
"gguf",
"en",
"th",
"dataset:wannaphong/KhanomTanLLM-pretrained-dataset",
"base_model:pythainlp/KhanomTanLLM-3B",
"base_model:quantized:pythainlp/KhanomTanLLM-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-01-19T06:00:04Z" | ---
base_model: pythainlp/KhanomTanLLM-3B
datasets:
- wannaphong/KhanomTanLLM-pretrained-dataset
language:
- en
- th
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pythainlp/KhanomTanLLM-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/KhanomTanLLM-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
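As a concrete example, one of the quants listed below can be loaded with `llama-cpp-python` (an assumption; any llama.cpp-based runtime works):
```python
# Sketch: load a quant from this repo with llama-cpp-python (an assumption;
# see the READMEs linked above for other runtimes).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/KhanomTanLLM-3B-i1-GGUF",
    filename="KhanomTanLLM-3B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
)
llm = Llama(model_path=path)
print(llm("Hello", max_tokens=48)["choices"][0]["text"])
```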
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q2_K.gguf) | i1-Q2_K | 2.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/KhanomTanLLM-3B-i1-GGUF/resolve/main/KhanomTanLLM-3B.i1-Q6_K.gguf) | i1-Q6_K | 4.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cunghoctienganh/a4f8f6b7-473d-44a0-ad19-9c32ae368864 | cunghoctienganh | "2025-01-15T15:38:30Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-15T15:19:21Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4f8f6b7-473d-44a0-ad19-9c32ae368864
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 80c99709830fd48a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80c99709830fd48a_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/a4f8f6b7-473d-44a0-ad19-9c32ae368864
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/80c99709830fd48a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e24b6a86-83f1-40ca-ac06-bdb6e674fa7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e24b6a86-83f1-40ca-ac06-bdb6e674fa7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a4f8f6b7-473d-44a0-ad19-9c32ae368864
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4312
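As with any LoRA adapter, it can be attached to its base model and optionally merged for standalone inference; a sketch assuming the standard `peft` API:
```python
# Sketch, assuming the standard peft API: attach this adapter to its base
# model and merge the LoRA weights for standalone inference.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "cunghoctienganh/a4f8f6b7-473d-44a0-ad19-9c32ae368864")
model = model.merge_and_unload()  # fold the adapter into the base weights
```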
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7471 | 0.1206 | 200 | 0.4312 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf | RichardErkhov | "2025-03-28T00:11:30Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-27T23:00:32Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama_3b_step2_batch_v1 - GGUF
- Model creator: https://huggingface.co/danielgombas/
- Original model: https://huggingface.co/danielgombas/llama_3b_step2_batch_v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama_3b_step2_batch_v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama_3b_step2_batch_v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama_3b_step2_batch_v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama_3b_step2_batch_v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama_3b_step2_batch_v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama_3b_step2_batch_v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama_3b_step2_batch_v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama_3b_step2_batch_v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama_3b_step2_batch_v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama_3b_step2_batch_v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama_3b_step2_batch_v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama_3b_step2_batch_v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama_3b_step2_batch_v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama_3b_step2_batch_v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama_3b_step2_batch_v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama_3b_step2_batch_v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama_3b_step2_batch_v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama_3b_step2_batch_v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama_3b_step2_batch_v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama_3b_step2_batch_v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama_3b_step2_batch_v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama_3b_step2_batch_v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/danielgombas_-_llama_3b_step2_batch_v1-gguf/blob/main/llama_3b_step2_batch_v1.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama_3b_step2_batch_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_3b_step2_batch_v1
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0531 | 0.0170 | 50 | 1.2007 |
| 1.0336 | 0.0341 | 100 | 1.1242 |
| 0.9428 | 0.0511 | 150 | 1.0800 |
| 1.4386 | 0.0682 | 200 | 1.0408 |
| 0.8375 | 0.0852 | 250 | 1.0127 |
| 0.9193 | 0.1023 | 300 | 0.9817 |
| 1.0368 | 0.1193 | 350 | 0.9573 |
| 1.2018 | 0.1364 | 400 | 0.9319 |
| 1.2749 | 0.1534 | 450 | 0.9072 |
| 0.9881 | 0.1704 | 500 | 0.8820 |
| 0.9707 | 0.1875 | 550 | 0.8599 |
| 1.2377 | 0.2045 | 600 | 0.8412 |
| 0.9024 | 0.2216 | 650 | 0.8180 |
| 0.5889 | 0.2386 | 700 | 0.8024 |
| 0.8046 | 0.2557 | 750 | 0.7899 |
| 0.83 | 0.2727 | 800 | 0.7710 |
| 0.6852 | 0.2898 | 850 | 0.7548 |
| 0.8512 | 0.3068 | 900 | 0.7422 |
| 0.8377 | 0.3238 | 950 | 0.7345 |
| 0.5361 | 0.3409 | 1000 | 0.7220 |
| 0.7696 | 0.3579 | 1050 | 0.7105 |
| 0.8175 | 0.3750 | 1100 | 0.7013 |
| 0.6144 | 0.3920 | 1150 | 0.6886 |
| 0.3598 | 0.4091 | 1200 | 0.6809 |
| 0.7176 | 0.4261 | 1250 | 0.6692 |
| 0.5281 | 0.4432 | 1300 | 0.6644 |
| 0.3555 | 0.4602 | 1350 | 0.6547 |
| 0.9024 | 0.4772 | 1400 | 0.6471 |
| 0.7713 | 0.4943 | 1450 | 0.6386 |
| 0.6172 | 0.5113 | 1500 | 0.6322 |
| 0.6325 | 0.5284 | 1550 | 0.6266 |
| 0.7503 | 0.5454 | 1600 | 0.6206 |
| 0.349 | 0.5625 | 1650 | 0.6136 |
| 0.7 | 0.5795 | 1700 | 0.6085 |
| 0.5014 | 0.5966 | 1750 | 0.6023 |
| 0.6441 | 0.6136 | 1800 | 0.5975 |
| 0.5066 | 0.6306 | 1850 | 0.5921 |
| 0.6036 | 0.6477 | 1900 | 0.5883 |
| 0.6549 | 0.6647 | 1950 | 0.5840 |
| 0.3903 | 0.6818 | 2000 | 0.5789 |
| 0.8864 | 0.6988 | 2050 | 0.5754 |
| 0.7164 | 0.7159 | 2100 | 0.5709 |
| 0.5504 | 0.7329 | 2150 | 0.5687 |
| 0.4216 | 0.7500 | 2200 | 0.5646 |
| 0.4241 | 0.7670 | 2250 | 0.5618 |
| 0.6452 | 0.7840 | 2300 | 0.5590 |
| 0.7067 | 0.8011 | 2350 | 0.5558 |
| 0.4536 | 0.8181 | 2400 | 0.5537 |
| 0.8657 | 0.8352 | 2450 | 0.5508 |
| 0.7452 | 0.8522 | 2500 | 0.5483 |
| 0.3444 | 0.8693 | 2550 | 0.5458 |
| 0.2889 | 0.8863 | 2600 | 0.5437 |
| 0.2415 | 0.9034 | 2650 | 0.5401 |
| 0.5393 | 0.9204 | 2700 | 0.5385 |
| 0.4866 | 0.9374 | 2750 | 0.5372 |
| 0.9233 | 0.9545 | 2800 | 0.5347 |
| 0.4623 | 0.9715 | 2850 | 0.5318 |
| 0.4211 | 0.9886 | 2900 | 0.5299 |
| 0.4308 | 1.0056 | 2950 | 0.5283 |
| 0.618 | 1.0227 | 3000 | 0.5285 |
| 0.7693 | 1.0397 | 3050 | 0.5262 |
| 0.2893 | 1.0568 | 3100 | 0.5266 |
| 0.461 | 1.0738 | 3150 | 0.5273 |
| 0.3648 | 1.0908 | 3200 | 0.5230 |
| 0.4981 | 1.1079 | 3250 | 0.5253 |
| 0.5005 | 1.1249 | 3300 | 0.5222 |
| 0.4117 | 1.1420 | 3350 | 0.5217 |
| 0.3319 | 1.1590 | 3400 | 0.5188 |
| 0.2549 | 1.1761 | 3450 | 0.5190 |
| 0.3758 | 1.1931 | 3500 | 0.5186 |
| 0.2889 | 1.2102 | 3550 | 0.5173 |
| 0.6341 | 1.2272 | 3600 | 0.5167 |
| 0.3217 | 1.2442 | 3650 | 0.5155 |
| 0.4406 | 1.2613 | 3700 | 0.5150 |
| 0.7445 | 1.2783 | 3750 | 0.5148 |
| 0.5511 | 1.2954 | 3800 | 0.5133 |
| 0.3933 | 1.3124 | 3850 | 0.5125 |
| 0.39 | 1.3295 | 3900 | 0.5134 |
| 0.3015 | 1.3465 | 3950 | 0.5126 |
| 0.8124 | 1.3636 | 4000 | 0.5118 |
| 0.6512 | 1.3806 | 4050 | 0.5111 |
| 0.7011 | 1.3976 | 4100 | 0.5106 |
| 0.4556 | 1.4147 | 4150 | 0.5103 |
| 0.4563 | 1.4317 | 4200 | 0.5100 |
| 0.2651 | 1.4488 | 4250 | 0.5100 |
| 0.5674 | 1.4658 | 4300 | 0.5090 |
| 0.2869 | 1.4829 | 4350 | 0.5093 |
| 0.5327 | 1.4999 | 4400 | 0.5088 |
| 0.726 | 1.5170 | 4450 | 0.5086 |
| 0.2619 | 1.5340 | 4500 | 0.5084 |
| 0.6597 | 1.5510 | 4550 | 0.5081 |
| 0.4848 | 1.5681 | 4600 | 0.5083 |
| 0.412 | 1.5851 | 4650 | 0.5080 |
| 0.6712 | 1.6022 | 4700 | 0.5077 |
| 0.5523 | 1.6192 | 4750 | 0.5076 |
| 0.5105 | 1.6363 | 4800 | 0.5077 |
| 0.5315 | 1.6533 | 4850 | 0.5071 |
| 0.4166 | 1.6704 | 4900 | 0.5069 |
| 0.4081 | 1.6874 | 4950 | 0.5065 |
| 0.3154 | 1.7044 | 5000 | 0.5063 |
| 0.396 | 1.7215 | 5050 | 0.5063 |
| 0.6121 | 1.7385 | 5100 | 0.5064 |
| 0.379 | 1.7556 | 5150 | 0.5063 |
| 0.4534 | 1.7726 | 5200 | 0.5061 |
| 0.5572 | 1.7897 | 5250 | 0.5060 |
| 0.3847 | 1.8067 | 5300 | 0.5059 |
| 0.3751 | 1.8238 | 5350 | 0.5060 |
| 0.4346 | 1.8408 | 5400 | 0.5061 |
| 0.4928 | 1.8578 | 5450 | 0.5061 |
| 0.5215 | 1.8749 | 5500 | 0.5060 |
| 0.6156 | 1.8919 | 5550 | 0.5060 |
| 0.4041 | 1.9090 | 5600 | 0.5060 |
| 0.5604 | 1.9260 | 5650 | 0.5059 |
| 0.424 | 1.9431 | 5700 | 0.5060 |
| 0.1856 | 1.9601 | 5750 | 0.5060 |
| 0.3701 | 1.9772 | 5800 | 0.5061 |
| 0.4201 | 1.9942 | 5850 | 0.5060 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.1.0+cu118
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mingxilei/distilbert-imdb | mingxilei | "2025-01-15T11:21:35Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T06:55:30Z" | ---
library_name: transformers
base_model: cardiffnlp/twitter-roberta-base-sentiment
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2327
- Accuracy: 0.7705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use sgd with no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2752 | 1.0 | 196 | 0.2345 | 0.7420 |
| 0.199 | 2.0 | 392 | 0.2329 | 0.7666 |
| 0.1862 | 3.0 | 588 | 0.2327 | 0.7705 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
RohanHBTU/flan-t5-base-finetuned-frnet-325ct | RohanHBTU | "2024-07-05T20:48:28Z" | 20 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-05T13:28:42Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: flan-t5-base-finetuned-frnet-325ct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-frnet-325ct
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6486
- Bleu: 35.1673
- Gen Len: 98.1731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:--------:|
| 0.878 | 1.0 | 8918 | 0.7682 | 28.3905 | 110.488 |
| 0.8594 | 2.0 | 17836 | 0.6753 | 33.6103 | 101.0089 |
| 0.7192 | 3.0 | 26754 | 0.6486 | 35.1673 | 98.1731 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Th3BossC/contradictions_model | Th3BossC | "2024-03-29T17:22:03Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-29T16:18:10Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: contradictions_model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# contradictions_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0973
- Accuracy: 0.3490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1191 | 0.07 | 100 | 1.1001 | 0.3177 |
| 1.1041 | 0.15 | 200 | 1.0959 | 0.3490 |
| 1.1081 | 0.22 | 300 | 1.0927 | 0.3993 |
| 1.1031 | 0.29 | 400 | 1.1143 | 0.3350 |
| 1.0855 | 0.37 | 500 | 1.0973 | 0.3490 |
| 1.0788 | 0.44 | 600 | 1.1068 | 0.3490 |
| 1.1029 | 0.51 | 700 | 1.0978 | 0.3490 |
| 1.1018 | 0.59 | 800 | 1.1049 | 0.3020 |
| 1.0983 | 0.66 | 900 | 1.1168 | 0.3267 |
| 1.1094 | 0.73 | 1000 | 1.1011 | 0.3020 |
| 1.0866 | 0.81 | 1100 | 1.1168 | 0.3020 |
| 1.1286 | 0.88 | 1200 | 1.1051 | 0.3020 |
| 1.1128 | 0.95 | 1300 | 1.1016 | 0.3490 |
| 1.1194 | 1.03 | 1400 | 1.0978 | 0.3490 |
| 1.0899 | 1.1 | 1500 | 1.1028 | 0.3490 |
| 1.0948 | 1.17 | 1600 | 1.0976 | 0.3490 |
| 1.1061 | 1.25 | 1700 | 1.0975 | 0.3490 |
| 1.0964 | 1.32 | 1800 | 1.1016 | 0.3020 |
| 1.1117 | 1.39 | 1900 | 1.0989 | 0.3490 |
| 1.1053 | 1.47 | 2000 | 1.1013 | 0.3020 |
| 1.0966 | 1.54 | 2100 | 1.0979 | 0.3490 |
| 1.1037 | 1.61 | 2200 | 1.1007 | 0.3490 |
| 1.1102 | 1.69 | 2300 | 1.0984 | 0.3490 |
| 1.1029 | 1.76 | 2400 | 1.0979 | 0.3490 |
| 1.095 | 1.83 | 2500 | 1.0975 | 0.3490 |
| 1.0942 | 1.91 | 2600 | 1.0973 | 0.3490 |
| 1.0962 | 1.98 | 2700 | 1.0973 | 0.3490 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
KappaNeuro/victor-moscoso-style | KappaNeuro | "2023-09-14T11:19:09Z" | 5 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"style",
"artist",
"painting",
"scene",
"victor moscoso",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T11:19:05Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- style
- artist
- painting
- scene
- victor moscoso
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Victor Moscoso Style
widget:
- text: "Victor Moscoso Style - an entertainment poster bill from the sixties or seventies in psychedelic art nouveau, muted undertoned halftone risographic retrograded halfway finished, nearly not, naturalistic borderline and VacantBliss center, byBuruj style nonsensical noise"
- text: "Victor Moscoso Style - /tomorrow dreams of the future, putting fragments together, desperate elements of strange entities, in the style of 1970s Soviet cartoons"
- text: "Victor Moscoso Style - multidimensional portal,lounge, geometrical, surreal film scene, unnatural, technicolor, 1970s, funhouse, SMC Takumar 35mm f/ 2. 8 c 50"
- text: "Victor Moscoso Style - snowspiria cartoon character comic underground comix grunge punk psychedelic pop surrealism flat 2d minimalist design jim woodring"
- text: "Victor Moscoso Style - pop art deco nouveau, flat 2d vector design, Bald dj high priest by Norman Saunders, lisa frank, James Gilleard and barry moser -"
- text: "Victor Moscoso Style - weird dreamcore scene, psychedelic early 70s, schizoid hallucination monsters with Dario Argento style"
- text: "Victor Moscoso Style - young husband and wife musical duo, felt embroidered fuzzy organic patterns quilted formal wear, 1977"
- text: "Victor Moscoso Style - Republic of Brazil. 1980s era USSR style propaganda. Eerie avant-garde motif"
- text: "Victor Moscoso Style - the big five san francisco poster artists victor moscoso Zap comix"
- text: "Victor Moscoso Style - psychedelics LSD experiential in the style of Pierre Cardin"
---
# Victor Moscoso Style ([CivitAI](https://civitai.com/models/107098))

> Victor Moscoso Style - an entertainment poster bill from the sixties or seventies in psychedelic art nouveau, muted undertoned halftone risographic retrograded halfway finished, nearly not, naturalistic borderline and VacantBliss center, byBuruj style nonsensical noise
Victor Moscoso is an American artist and one of the pioneers of the psychedelic art movement. Born in 1936, Moscoso emerged as a prominent figure in the counterculture scene of the 1960s, particularly in San Francisco, where he became known for his vibrant and mind-altering poster designs.

Moscoso's artwork is characterized by its bold use of color, psychedelic patterns, and optical illusions. He was known for his innovative approach to typography, experimenting with distorted letterforms and visual effects to create a sense of movement and visual intensity.

His poster designs often incorporated imagery inspired by popular culture, music, and social and political issues of the time. Moscoso's work was not only visually striking but also communicated a sense of rebellion and a desire for societal transformation.

In addition to his poster art, Moscoso also ventured into other artistic mediums, including painting and comic book illustration. His paintings often carried the same psychedelic aesthetic, with swirling forms and vibrant colors.

Moscoso's contributions to the psychedelic art movement have left an indelible mark on the art world. His distinctive style and ability to capture the spirit of the counterculture era have made him an influential figure, inspiring subsequent generations of artists and continuing to resonate with audiences who appreciate the visual and cultural significance of his work.
## Image examples for the model:

> Victor Moscoso Style - /tomorrow dreams of the future, putting fragments together, desperate elements of strange entities, in the style of 1970s Soviet cartoons

> Victor Moscoso Style - multidimensional portal,lounge, geometrical, surreal film scene, unnatural, technicolor, 1970s, funhouse, SMC Takumar 35mm f/ 2. 8 c 50

> Victor Moscoso Style - snowspiria cartoon character comic underground comix grunge punk psychedelic pop surrealism flat 2d minimalist design jim woodring

> Victor Moscoso Style - pop art deco nouveau, flat 2d vector design, Bald dj high priest by Norman Saunders, lisa frank, James Gilleard and barry moser -

> Victor Moscoso Style - weird dreamcore scene, psychedelic early 70s, schizoid hallucination monsters with Dario Argento style

> Victor Moscoso Style - young husband and wife musical duo, felt embroidered fuzzy organic patterns quilted formal wear, 1977

> Victor Moscoso Style - Republic of Brazil. 1980s era USSR style propaganda. Eerie avant-garde motif

> Victor Moscoso Style - the big five san francisco poster artists victor moscoso Zap comix

> Victor Moscoso Style - psychedelics LSD experiential in the style of Pierre Cardin
|
DatTran0509/Finetune_mBERT_QA | DatTran0509 | "2025-04-03T21:57:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2025-04-03T13:48:54Z" | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Finetune_mBERT_QA
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune_mBERT_QA
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6678
- Exact: 36.3136
- F1: 40.2149
- Total: 3814
- Hasans Exact: 8.4433
- Hasans F1: 14.0519
- Hasans Total: 2653
- Noans Exact: 100.0
- Noans F1: 100.0
- Noans Total: 1161
- Best Exact: 36.3136
- Best Exact Thresh: 0.0
- Best F1: 40.2149
- Best F1 Thresh: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 2048
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Noans Exact | Noans F1 | Noans Total | Best Exact | Best Exact Thresh | Best F1 | Best F1 Thresh |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:-----------:|:--------:|:-----------:|:----------:|:-----------------:|:-------:|:--------------:|
| No log | 0.9412 | 14 | 3.5600 | 30.4405 | 31.8958 | 3814 | 0.0 | 2.0922 | 2653 | 100.0 | 100.0 | 1161 | 30.4405 | 0.0 | 31.8958 | 0.0 |
| No log | 1.9412 | 28 | 2.4854 | 31.0435 | 32.9177 | 3814 | 0.8669 | 3.5612 | 2653 | 100.0 | 100.0 | 1161 | 31.0435 | 0.0 | 32.9177 | 0.0 |
| No log | 2.9412 | 42 | 2.1689 | 32.5380 | 35.4782 | 3814 | 3.0155 | 7.2423 | 2653 | 100.0 | 100.0 | 1161 | 32.5380 | 0.0 | 35.4782 | 0.0 |
| 3.1974 | 3.9412 | 56 | 1.9668 | 33.9276 | 37.1889 | 3814 | 5.0132 | 9.7016 | 2653 | 100.0 | 100.0 | 1161 | 33.9276 | 0.0 | 37.1889 | 0.0 |
| 3.1974 | 4.9412 | 70 | 1.8414 | 34.9764 | 38.4015 | 3814 | 6.5209 | 11.4449 | 2653 | 100.0 | 100.0 | 1161 | 34.9764 | 0.0 | 38.4015 | 0.0 |
| 3.1974 | 5.9412 | 84 | 1.7441 | 35.2910 | 38.4417 | 3814 | 6.9732 | 11.5027 | 2653 | 100.0 | 100.0 | 1161 | 35.2910 | 0.0 | 38.4417 | 0.0 |
| 3.1974 | 6.9412 | 98 | 1.7150 | 36.2611 | 40.1966 | 3814 | 8.3679 | 14.0256 | 2653 | 100.0 | 100.0 | 1161 | 36.2611 | 0.0 | 40.1966 | 0.0 |
| 1.759 | 7.9412 | 112 | 1.6887 | 36.4709 | 40.4782 | 3814 | 8.6694 | 14.4304 | 2653 | 100.0 | 100.0 | 1161 | 36.4709 | 0.0 | 40.4782 | 0.0 |
| 1.759 | 8.9412 | 126 | 1.6686 | 36.1563 | 39.8798 | 3814 | 8.2171 | 13.5701 | 2653 | 100.0 | 100.0 | 1161 | 36.1563 | 0.0 | 39.8798 | 0.0 |
| 1.759 | 9.9412 | 140 | 1.6678 | 36.3136 | 40.2149 | 3814 | 8.4433 | 14.0519 | 2653 | 100.0 | 100.0 | 1161 | 36.3136 | 0.0 | 40.2149 | 0.0 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
epsil/sd-class-butterflies-64 | epsil | "2022-11-29T18:13:23Z" | 5 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2022-11-29T18:13:12Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("epsil/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
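
For reproducible outputs, a small hedged extension of the example above (the `generator` argument follows the standard diffusers pipeline API):

```python
import torch

# Fix the random seed so repeated runs produce the same butterfly, then save it
generator = torch.manual_seed(42)
image = pipeline(generator=generator).images[0]
image.save("butterfly.png")
```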
|
KoelLabs/xlsr-timit-a0 | KoelLabs | "2024-12-23T21:12:46Z" | 7 | 1 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mpl-2.0",
"region:us"
] | automatic-speech-recognition | "2024-12-01T22:32:14Z" | ---
base_model:
- ginic/hyperparam_tuning_1_wav2vec2-large-xlsr-buckeye-ipa
language:
- en
license: mpl-2.0
metrics:
- cer
pipeline_tag: automatic-speech-recognition
---
# XLSR-TIMIT-B0: Fine-tuned on TIMIT for Phonemic Transcription
This model leverages the pretrained checkpoint [ginic/hyperparam_tuning_1_wav2vec2-large-xlsr-buckeye-ipa](https://huggingface.co/ginic/data_seed_4_wav2vec2-large-xlsr-buckeye-ipa) and is fine-tuned on the [TIMIT Darpa English Corpus](https://github.com/philipperemy/timit) to transcribe audio into phonemic representations for the English language.
**Performance**
- Training Loss: 4.73
- Validation Loss: 1.048
- Test Results (TIMIT test set):
- Average Weighted Distance: 18.06
- Standard Deviation (Weighted Distance): 12.9
- Average Character Error Rate (CER): 0.14
- Standard Deviation (CER): 0.07
**Model Information**
- Number of Epochs: 40
- Learning Rate: 5e-6
- Optimizer: Adam
- Datasets Used: TIMIT, Darpa English Corpus
**Example Outputs**
1. **Prediction**: `lizteɪkðɪsdɹɾiteɪbklɔθiðiklinizfɹmi`
**Ground Truth**: `lizteɪkðɪsdɹɾiteɪbəklɔtiðiklinizfɹmi`
**Weighted Feature Edit Distance**: 7.875
**CER**: 0.0556
2. **Prediction**: `ɹænmʌðɹʔaʊtɹuhɹʔʌpɹɪŋiɾimpɛɾikoʊts`
**Ground Truth**: `ɹænmʌðɹʔaʊtɹuhɹʔʌpɹɪŋiŋinpɛɾikoʊts`
**Weighted Feature Edit Distance**: 2.375
**CER**: 0.0588
## Limitations
This phonemic transcription model is fine-tuned on an English speech corpus that does not encompass all dialects and languages. We acknowledge that it may significantly underperform for any unseen languages. We aim to release models and datasets that better serve all populations and languages in the future.
---
# Usage
To transcribe audio files, this model can be used as follows:
```python
from transformers import AutoModelForCTC, AutoProcessor
import torch
# Load model and processor
model = AutoModelForCTC.from_pretrained("KoelLabs/xlsr-timit-b0")
processor = AutoProcessor.from_pretrained("KoelLabs/xlsr-timit-b0")
# Prepare input: load the waveform and resample to 16 kHz, the rate this model expects
import torchaudio
waveform, sample_rate = torchaudio.load("path_to_your_audio_file.wav")  # Replace with your file (mono audio assumed)
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
input_values = processor(waveform.squeeze().numpy(), return_tensors="pt", sampling_rate=16000).input_values

# Retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# Decode predictions
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
``` |
FrancescoPeriti/Llama2Dictionary | FrancescoPeriti | "2024-12-06T12:43:07Z" | 16 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"text2text-generation",
"en",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-24T13:14:40Z" | ---
license: cc-by-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text-generation-inference
base_model:
- meta-llama/Llama-2-7b-chat-hf
---
# Llama2Dictionary
<!-- Provide a quick summary of what the model is/does. -->
```FrancescoPeriti/Llama2Dictionary``` is a fine-tuned version of the ```meta-llama/Llama-2-7b-chat-hf```.
Thus, to use it, visit the AI at Meta website, accept the Meta License, and submit the [form](https://llama.meta.com/llama-downloads/).
You will need to log in with your Hugging Face token (```[HF-TOKEN]``` in the following).
### Model Description
This model is fine-tuned on English datasets of sense definitions. Given a target word and a usage example, the model generates a sense definition for the target word in-context.
You can find more details in the paper [Automatically Generated Definitions and their utility for Modeling Word Meaning](https://aclanthology.org/2024.emnlp-main.776/) by Francesco Periti, David Alfter, Nina Tahmasebi.
The repository of our project is [https://github.com/FrancescoPeriti/LlamaDictionary](https://github.com/FrancescoPeriti/LlamaDictionary).
## Uses
The model is designed for research purposes and is conceived to work like a dictionary.
However, given a word and an example usage, users don't choose from a list of definitions (as in a traditional dictionary); instead, the model directly provides the sense definition for the word in-context.
<!-- ### Direct Use -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- ### Downstream Use [optional]-->
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
## Bias, Risks, and Limitations
The fine-tuning datasets were limited to English, and generated definitions may reflect biases and stereotypes inherent in the underlying language model.
## How to Get Started with the Model
```python
import torch
import warnings
from peft import PeftModel  # parameter-efficient fine-tuning
from datasets import Dataset
from huggingface_hub import login
from typing import (Literal, Sequence, TypedDict)
from transformers import AutoTokenizer, AutoModelForCausalLM

login([HF-TOKEN])  # e.g., hf_aGPI...ELal

model_name = "meta-llama/Llama-2-7b-chat-hf"  # chat model
ft_model_name = "FrancescoPeriti/Llama2Dictionary"  # fine-tuned model

# load models
chat_model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
lama2dictionary = PeftModel.from_pretrained(chat_model, ft_model_name)
lama2dictionary.eval()

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    padding_side="left",
    add_eos_token=True,
    add_bos_token=True,
)
tokenizer.pad_token = tokenizer.eos_token

# end of sequence for stop condition
eos_tokens = [tokenizer.encode(token, add_special_tokens=False)[0]
              for token in [';', ' ;', '.', ' .']]
eos_tokens.append(tokenizer.eos_token_id)

# chat format
Role = Literal["system", "user"]

class Message(TypedDict):
    role: Role
    content: str

Dialog = Sequence[Message]

# load dataset
examples = [{'target': 'jam', 'example': 'The traffic jam on the highway made everyone late for work.'},
            {'target': 'jam', 'example': 'I spread a generous layer of strawberry jam on my toast this morning'}]
dataset = Dataset.from_list(examples)

# apply template
def apply_chat_template(tokenizer, dataset):
    system_message = "You are a lexicographer familiar with providing concise definitions of word meanings."
    template = 'Please provide a concise definition for the meaning of the word "{}" in the following sentence: {}'

    def apply_chat_template_func(record):
        dialog: Dialog = (Message(role='system', content=system_message),
                          Message(role='user', content=template.format(record['target'], record['example'])))
        prompt = tokenizer.decode(tokenizer.apply_chat_template(dialog, add_generation_prompt=True))
        return {'text': prompt}

    return dataset.map(apply_chat_template_func)

dataset = apply_chat_template(tokenizer, dataset)

# tokenization
max_length = 512

def formatting_func(record):
    return record['text']

def tokenization(dataset):
    result = tokenizer(formatting_func(dataset),
                       truncation=True,
                       max_length=max_length,
                       padding="max_length",
                       add_special_tokens=False)
    return result

tokenized_dataset = dataset.map(tokenization)

# definition generation
batch_size = 32
max_time = 4.5  # sec

sense_definitions = list()
with torch.no_grad():
    for i in range(0, len(tokenized_dataset), batch_size):
        batch = tokenized_dataset[i:i + batch_size]
        model_input = dict()
        for k in ['input_ids', 'attention_mask']:
            model_input[k] = torch.tensor(batch[k]).to('cuda')
        output_ids = lama2dictionary.generate(**model_input,
                                              max_length=max_length,
                                              forced_eos_token_id=eos_tokens,
                                              max_time=max_time * batch_size,
                                              eos_token_id=eos_tokens,
                                              temperature=0.00001,
                                              pad_token_id=tokenizer.eos_token_id)
        answers = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
        for j, answer in enumerate(answers):
            answer = answer.split('[/INST]')[-1].strip(" .,;:")
            if 'SYS>>' in answer:
                answer = ''
                warnings.warn("Something went wrong. The input example might be too long; try reducing it.")
            sense_definitions.append(answer.replace('\n', ' ') + '\n')

# output
dataset = dataset.add_column('definition', sense_definitions)
for row in dataset:
    print(f"Target: {row['target']}\nExample: {row['example']}\nSense definition: {row['definition']}")
```
## Citation
Francesco Periti, David Alfter, and Nina Tahmasebi. 2024. [Automatically Generated Definitions and their utility for Modeling Word Meaning](https://aclanthology.org/2024.emnlp-main.776/). In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14008–14026, Miami, Florida, USA. Association for Computational Linguistics.
**BibTeX:**
```
@inproceedings{periti2024automatically,
title = {{Automatically Generated Definitions and their utility for Modeling Word Meaning}},
author = "Periti, Francesco and Alfter, David and Tahmasebi, Nina",
editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.776",
pages = "14008--14026",
abstract = "Modeling lexical semantics is a challenging task, often suffering from interpretability pitfalls. In this paper, we delve into the generation of dictionary-like sense definitions and explore their utility for modeling word meaning. We fine-tuned two Llama models and include an existing T5-based model in our evaluation. Firstly, we evaluate the quality of the generated definitions on existing English benchmarks, setting new state-of-the-art results for the Definition Generation task. Next, we explore the use of definitions generated by our models as intermediate representations subsequently encoded as sentence embeddings. We evaluate this approach on lexical semantics tasks such as the Word-in-Context, Word Sense Induction, and Lexical Semantic Change, setting new state-of-the-art results in all three tasks when compared to unsupervised baselines.",
}
``` |
Vasi001/whisper-small | Vasi001 | "2022-12-10T23:32:04Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-10T21:57:53Z" | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Swedish
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
PrunaAI/NESPED-GEN-StableCode-text2SQL-withoutquantization-5epoch-bnb-8bit-smashed | PrunaAI | "2025-01-08T15:18:07Z" | 5 | 0 | null | [
"safetensors",
"stablelm",
"pruna-ai",
"base_model:NESPED-GEN/StableCode-text2SQL-withoutquantization-5epoch",
"base_model:quantized:NESPED-GEN/StableCode-text2SQL-withoutquantization-5epoch",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-08T15:15:07Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: NESPED-GEN/StableCode-text2SQL-withoutquantization-5epoch
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo NESPED-GEN/StableCode-text2SQL-withoutquantization-5epoch are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/NESPED-GEN-StableCode-text2SQL-withoutquantization-5epoch-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NESPED-GEN/StableCode-text2SQL-withoutquantization-5epoch")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NESPED-GEN/StableCode-text2SQL-withoutquantization-5epoch before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
lesso/b5953ab8-8ff3-4f8e-b577-d743ccfda01a | lesso | "2025-02-05T21:09:37Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T21:03:52Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5953ab8-8ff3-4f8e-b577-d743ccfda01a
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 283c4184083d47ae_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/283c4184083d47ae_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/b5953ab8-8ff3-4f8e-b577-d743ccfda01a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00010017
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/G.O.D/283c4184083d47ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a0ab5165-7ac1-4405-b1f9-20e99af02244
wandb_project: new-17
wandb_run: your_name
wandb_runid: a0ab5165-7ac1-4405-b1f9-20e99af02244
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b5953ab8-8ff3-4f8e-b577-d743ccfda01a
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00010017
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3257 | 0.0003 | 1 | 1.8642 |
| 2.1441 | 0.0170 | 50 | 1.4145 |
| 1.7458 | 0.0340 | 100 | 1.2740 |
| 1.5074 | 0.0509 | 150 | 1.2463 |
| 1.5048 | 0.0679 | 200 | 1.2382 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF | brandtcormorant | "2025-04-14T22:17:52Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:nomic-ai/CodeRankEmbed",
"base_model:quantized:nomic-ai/CodeRankEmbed",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | "2025-04-14T22:17:37Z" | ---
base_model: nomic-ai/CodeRankEmbed
library_name: sentence-transformers
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF
This model was converted to GGUF format from [`nomic-ai/CodeRankEmbed`](https://huggingface.co/nomic-ai/CodeRankEmbed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nomic-ai/CodeRankEmbed) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF --hf-file coderankembed-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF --hf-file coderankembed-q4_k_m.gguf -c 2048
```
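
Since CodeRankEmbed is an embedding model rather than a text-generation model, the completion-style prompts above mainly verify that the file loads. For actual embeddings, a hedged sketch using llama.cpp's embedding tool (assumes a build that ships the `llama-embedding` binary; the flags mirror the CLI example above):

```bash
llama-embedding --hf-repo brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF --hf-file coderankembed-q4_k_m.gguf -p "def quicksort(arr): ..."
```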
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF --hf-file coderankembed-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo brandtcormorant/CodeRankEmbed-Q4_K_M-GGUF --hf-file coderankembed-q4_k_m.gguf -c 2048
```
|
systemk/gemma-3-27b-ja | systemk | "2025-04-08T06:50:18Z" | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-27b-it",
"base_model:finetune:unsloth/gemma-3-27b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T06:12:43Z" | |
mayurbante85/lorapony-mar27 | mayurbante85 | "2025-03-27T13:55:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-27T09:42:14Z" | |
end000/gemma-3-12b-it-Q4_K_M-GGUF | end000 | "2025-03-13T16:28:33Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | "2025-03-13T16:27:52Z" | ---
base_model: google/gemma-3-12b-it
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# end000/gemma-3-12b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-12b-it`](https://huggingface.co/google/gemma-3-12b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo end000/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo end000/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo end000/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo end000/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048
```
|
jimmycarter/flux-training-losercity-next-tests | jimmycarter | "2024-08-19T19:09:44Z" | 31 | 1 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-08-18T19:55:10Z" | ---
license: creativeml-openrail-m
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'loona from helluva boss is eating a donut'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
---
# flux-training-losercity-next-tests
This is a LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
loona from helluva boss is eating a donut
```
## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `15`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 36
- Training steps: 3000
- Learning rate: 0.0002
- Effective batch size: 4
- Micro-batch size: 4
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: bf16
- Quantised: No
- Xformers: Not used
- LoRA Rank: 32
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### default_dataset
- Repeats: 0
- Total number of images: 42
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
### default_dataset_512
- Repeats: 0
- Total number of images: 42
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
### default_dataset_768
- Repeats: 0
- Total number of images: 42
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'jimmycarter/flux-training-losercity-next-tests'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "loona from helluva boss is eating a donut"
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
    prompt=prompt,
    num_inference_steps=15,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
|
MaziyarPanahi/IceMartiniV1RP-7b-GGUF | MaziyarPanahi | "2024-11-01T00:28:04Z" | 37 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:icefog72/IceMartiniV1RP-7b",
"base_model:quantized:icefog72/IceMartiniV1RP-7b",
"region:us",
"conversational"
] | text-generation | "2024-11-01T00:05:39Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: IceMartiniV1RP-7b-GGUF
base_model: icefog72/IceMartiniV1RP-7b
inference: false
model_creator: icefog72
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/IceMartiniV1RP-7b-GGUF](https://huggingface.co/MaziyarPanahi/IceMartiniV1RP-7b-GGUF)
- Model creator: [icefog72](https://huggingface.co/icefog72)
- Original model: [icefog72/IceMartiniV1RP-7b](https://huggingface.co/icefog72/IceMartiniV1RP-7b)
## Description
[MaziyarPanahi/IceMartiniV1RP-7b-GGUF](https://huggingface.co/MaziyarPanahi/IceMartiniV1RP-7b-GGUF) contains GGUF format model files for [icefog72/IceMartiniV1RP-7b](https://huggingface.co/icefog72/IceMartiniV1RP-7b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. A short usage sketch follows this list.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
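
As a minimal illustration with llama-cpp-python from the list above, a hedged sketch — the quant filename pattern is an assumption, so match it to the file you actually want from this repo:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/IceMartiniV1RP-7b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed pattern; pick the quant you downloaded
)
print(llm("Write a short greeting.", max_tokens=64)["choices"][0]["text"])
```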
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
mradermacher/Codestral-22B-v0.1-GGUF | mradermacher | "2024-09-11T16:10:11Z" | 195 | 0 | transformers | [
"transformers",
"gguf",
"code",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-11T14:52:30Z" | ---
base_model: mistralai/Codestral-22B-v0.1
language:
- code
library_name: transformers
license: other
license_link: https://mistral.ai/licences/MNPL-0.1.md
license_name: mnpl
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mistralai/Codestral-22B-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Codestral-22B-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
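
If a quant is split into multiple parts (e.g. `*.part1of2` files), the parts can simply be concatenated back into a single GGUF file; a hedged sketch with placeholder filenames:

```bash
cat Codestral-22B-v0.1.Q8_0.gguf.part1of2 Codestral-22B-v0.1.Q8_0.gguf.part2of2 > Codestral-22B-v0.1.Q8_0.gguf
```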
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.IQ3_XS.gguf) | IQ3_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.IQ3_S.gguf) | IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.IQ3_M.gguf) | IQ3_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-GGUF/resolve/main/Codestral-22B-v0.1.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LevonHakobyan/head_l23_cos_anneal_2 | LevonHakobyan | "2024-07-07T22:29:46Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-07T16:44:27Z" | ---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: head_l23_cos_anneal_2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/levonhakobyan7-USC/huggingface/runs/nmv5e24y)
# head_l23_cos_anneal_2
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.2120
- eval_wer: 0.9981
- eval_cer: 0.4183
- eval_runtime: 72.4648
- eval_samples_per_second: 59.077
- eval_steps_per_second: 7.397
- epoch: 104.6154
- step: 34000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 154
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Starxx/LLaMa3-Fine-Tuning-Law-GGUF | Starxx | "2024-05-04T10:15:10Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-04T10:12:42Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Starxx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bradoc/ner-bert-large-cased-pt-lenerbr-finetuned-ner | bradoc | "2023-12-11T21:16:24Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:contratos_tceal",
"base_model:pierreguillou/ner-bert-large-cased-pt-lenerbr",
"base_model:finetune:pierreguillou/ner-bert-large-cased-pt-lenerbr",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-11T21:15:27Z" | ---
base_model: pierreguillou/ner-bert-large-cased-pt-lenerbr
tags:
- generated_from_trainer
datasets:
- contratos_tceal
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-bert-large-cased-pt-lenerbr-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: contratos_tceal
      type: contratos_tceal
      config: contratos_tceal
      split: validation
      args: contratos_tceal
    metrics:
    - name: Precision
      type: precision
      value: 0.7549019607843137
    - name: Recall
      type: recall
      value: 0.8115313081215128
    - name: F1
      type: f1
      value: 0.7821930086644756
    - name: Accuracy
      type: accuracy
      value: 0.883160638230246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bert-large-cased-pt-lenerbr-finetuned-ner
This model is a fine-tuned version of [pierreguillou/ner-bert-large-cased-pt-lenerbr](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr) on the contratos_tceal dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.7549
- Recall: 0.8115
- F1: 0.7822
- Accuracy: 0.8832
## Model description
More information needed
## Intended uses & limitations
More information needed
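For completeness, a hedged inference sketch using the standard token-classification pipeline (the aggregation strategy is an assumption; the example sentence is invented):
```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint keeps a BIO-style label scheme, so
# "simple" aggregation merges subword tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="bradoc/ner-bert-large-cased-pt-lenerbr-finetuned-ner",
    aggregation_strategy="simple",
)

text = "O contrato foi firmado entre o Estado de Alagoas e a empresa contratada."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```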
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 91 | nan | 0.6987 | 0.7433 | 0.7203 | 0.8620 |
| No log | 2.0 | 182 | nan | 0.7040 | 0.7564 | 0.7292 | 0.8624 |
| No log | 3.0 | 273 | nan | 0.7317 | 0.7929 | 0.7611 | 0.8731 |
| No log | 4.0 | 364 | nan | 0.7501 | 0.8097 | 0.7788 | 0.8838 |
| No log | 5.0 | 455 | nan | 0.7504 | 0.8332 | 0.7897 | 0.8857 |
| 0.3495 | 6.0 | 546 | nan | 0.7551 | 0.8103 | 0.7817 | 0.8799 |
| 0.3495 | 7.0 | 637 | nan | 0.7533 | 0.8215 | 0.7859 | 0.8824 |
| 0.3495 | 8.0 | 728 | nan | 0.7578 | 0.7991 | 0.7779 | 0.8785 |
| 0.3495 | 9.0 | 819 | nan | 0.7520 | 0.8196 | 0.7843 | 0.8840 |
| 0.3495 | 10.0 | 910 | nan | 0.7549 | 0.8115 | 0.7822 | 0.8832 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mlfoundations-dev/1k_globalbatchsize32_lr2e5_epochs9 | mlfoundations-dev | "2025-03-26T05:27:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-26T00:27:11Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 1k_globalbatchsize32_lr2e5_epochs9
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1k_globalbatchsize32_lr2e5_epochs9
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts_1000 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 9.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Hamzaabbas77/FINAL-GPT2 | Hamzaabbas77 | "2023-09-01T07:35:38Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-01T07:35:36Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
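The same settings can be reconstructed when loading the base model with `transformers`; a hedged sketch, noting that the card does not name the base model (`"gpt2"` below is only a placeholder suggested by the repo name):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hedged reconstruction of the quantization config listed above; the
# remaining fields (fp4 quant type, float32 compute dtype) are the defaults.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# Placeholder base model: the card does not state which model the PEFT
# adapter was trained on.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    quantization_config=bnb_config,
    device_map="auto",
)
```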
### Framework versions
- PEFT 0.6.0.dev0
|
pablouribe/xls-r-ab-test | pablouribe | "2022-01-30T05:13:34Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.2596
- Wer: 19.1571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
mradermacher/Boomer_Qwen_72B-i1-GGUF | mradermacher | "2025-03-05T07:28:01Z" | 354 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SicariusSicariiStuff/Boomer_Qwen_72B",
"base_model:quantized:SicariusSicariiStuff/Boomer_Qwen_72B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-04T02:41:18Z" | ---
base_model: SicariusSicariiStuff/Boomer_Qwen_72B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SicariusSicariiStuff/Boomer_Qwen_72B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Boomer_Qwen_72B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
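For the split files in the table below, the parts only need to be joined byte-for-byte (`cat part1 part2 > file.gguf` on Unix). A rough Python equivalent, using file names from the table as examples:
```python
# Join a multi-part GGUF download into a single file.
parts = [
    "Boomer_Qwen_72B.i1-Q5_K_S.gguf.part1of2",
    "Boomer_Qwen_72B.i1-Q5_K_S.gguf.part2of2",
]

with open("Boomer_Qwen_72B.i1-Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            # Stream in 1 MiB chunks so large parts never sit in memory.
            while chunk := f.read(1 << 20):
                out.write(chunk)
```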
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q4_1.gguf) | i1-Q4_1 | 45.8 | |
| [GGUF](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Boomer_Qwen_72B-i1-GGUF/resolve/main/Boomer_Qwen_72B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
beratcmn/sa-moj-llama-2-7b-v0.2-5e | beratcmn | "2023-09-19T13:32:48Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-18T21:13:52Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
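This is the usual QLoRA-style 4-bit setup; a hedged sketch of the equivalent `transformers` config (the Llama-2 base id is inferred from the repo name, not stated in the card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hedged reconstruction of the 4-bit NF4 double-quantization config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed base model, based only on the repo name "sa-moj-llama-2-7b".
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```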
### Framework versions
- PEFT 0.5.0
|
Xenopilus/mega-base-multiple-choice-fp16-v3 | Xenopilus | "2024-01-17T15:02:48Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mega",
"multiple-choice",
"generated_from_trainer",
"base_model:mnaylor/mega-base-wikitext",
"base_model:finetune:mnaylor/mega-base-wikitext",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-01-17T15:01:13Z" | ---
license: apache-2.0
base_model: mnaylor/mega-base-wikitext
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: mega-base-multiple-choice-fp16-v3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mega-base-multiple-choice-fp16-v3
This model is a fine-tuned version of [mnaylor/mega-base-wikitext](https://huggingface.co/mnaylor/mega-base-wikitext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.4974
- Precision: 0.4974
- Recall: 0.5020
- F1: 0.4997
## Model description
More information needed
## Intended uses & limitations
More information needed
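The evaluation metrics above sit at chance level (~0.50), so the checkpoint is mainly a training artifact. For reference, a hedged multiple-choice inference sketch (the prompt and candidates are invented):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "Xenopilus/mega-base-multiple-choice-fp16-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

prompt = "The capital of France is"
choices = ["Paris.", "Berlin."]

# Encode the prompt once per candidate, then add a batch dimension of 1.
inputs = tokenizer([prompt] * len(choices), choices,
                   return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```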
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 34 | 0.6932 | 0.4970 | 0.4971 | 0.5023 | 0.4997 |
| No log | 2.0 | 68 | 0.6932 | 0.4975 | 0.4975 | 0.5026 | 0.5001 |
| No log | 3.0 | 102 | 0.6932 | 0.4974 | 0.4974 | 0.5020 | 0.4997 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
bowilleatyou/600f72fb-9c7e-4a5e-8d15-e94bdafcb57b | bowilleatyou | "2025-04-07T11:45:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T06:30:43Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fedecba007/SmolLM2-FT-MyDataset | fedecba007 | "2025-01-29T03:13:16Z" | 25 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-29T03:12:49Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fedecba007/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/voludeces22-mcdonald-s/huggingface/runs/xan4gp0i)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
software-mansion/react-native-executorch-efficientnet-v2-s | software-mansion | "2024-12-17T10:39:05Z" | 12 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-12-17T09:32:05Z" | ---
license: other
license_name: apache-license-2.0
license_link: https://github.com/google/automl/blob/master/LICENSE
---
# Introduction
This repository hosts the [efficientnet_v2_s](https://pytorch.org/vision/0.20/models/generated/torchvision.models.efficientnet_v2_s.html#torchvision.models.efficientnet_v2_s) models for the [React Native ExecuTorch](https://www.npmjs.com/package/react-native-executorch) library. It includes models exported for XNNPACK as well as Core ML, in `.pte` format, ready for use in the **ExecuTorch** runtime.
If you'd like to run these models in your own ExecuTorch runtime, refer to the [official documentation](https://pytorch.org/executorch/stable/index.html) for setup instructions.
## Compatibility
If you intend to use these models outside of React Native ExecuTorch, make sure your runtime is compatible with the **ExecuTorch** version used to export the `.pte` files. For more details, see the compatibility note in the [ExecuTorch GitHub repository](https://github.com/pytorch/executorch/blob/11d1742fdeddcf05bc30a6cfac321d2a2e3b6768/runtime/COMPATIBILITY.md?plain=1#L4). If you work with React Native ExecuTorch, the constants from the library will guarantee compatibility with the runtime used behind the scenes.
These models were exported using commit `fe20be98c` and **no forward compatibility** is guaranteed. Older versions of the runtime may not work with these files.
### Repository Structure
The repository is organized into two main directories:
- `xnnpack`
- `coreml`
Each directory contains models exported for the respective backend.
- The `.pte` file should be passed to the `modelSource` parameter.
|
HachiML/mistral_2x7b_v0.1 | HachiML | "2024-04-14T08:17:42Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mixture of experts",
"moe",
"merge",
"mergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"nvidia/OpenMath-Mistral-7B-v0.1-hf",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:nvidia/OpenMath-Mistral-7B-v0.1-hf",
"base_model:merge:nvidia/OpenMath-Mistral-7B-v0.1-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-14T08:09:44Z" | ---
license: apache-2.0
tags:
- mixture of experts
- moe
- merge
- mergekit
- mistralai/Mistral-7B-Instruct-v0.2
- nvidia/OpenMath-Mistral-7B-v0.1-hf
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- nvidia/OpenMath-Mistral-7B-v0.1-hf
---
# mistral_2x7b_v0.1
mistral_2x7b_v0.1 is a Mixture of Experts (MoE) made with the following models using [mergekit-moe](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [nvidia/OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf)
## 🧩 Configuration
```yaml
base_model: mistralai/Mistral-7B-v0.1
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "What are some fun activities to do in Seattle?"
      - "What are the potential long-term economic impacts of raising the minimum wage?"
  - source_model: nvidia/OpenMath-Mistral-7B-v0.1-hf
    positive_prompts:
      - "What is 27 * 49? Show your step-by-step work."
      - "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "HachiML/mistral_2x7b_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
sallywww/pp2inv | sallywww | "2024-04-02T18:42:35Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2024-04-02T18:25:18Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: True
- _load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: False
- load_in_8bit: True
### Framework versions
- PEFT 0.5.0
|
TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ | TheBloke | "2024-01-10T04:58:10Z" | 19 | 6 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES",
"base_model:quantized:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-01-10T02:33:43Z" | ---
base_model: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES
inference: false
model_creator: Doctor Shotgun
model_name: Mixtral 8X7B Instruct V0.1 LimaRP ZLoss DARE TIES
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- mergekit
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mixtral 8X7B Instruct V0.1 LimaRP ZLoss DARE TIES - GPTQ
- Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [Mixtral 8X7B Instruct V0.1 LimaRP ZLoss DARE TIES](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Doctor Shotgun's Mixtral 8X7B Instruct V0.1 LimaRP ZLoss DARE TIES](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GGUF)
* [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
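If you want to reproduce a quant like the ones below yourself, these parameters map directly onto `transformers`' `GPTQConfig`; this is a hedged sketch only — it is not the exact pipeline used for this repo, and `dataset="c4"` stands in for the VMware Open Instruct calibration set:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Illustrative mapping from the table's columns to GPTQConfig fields.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
gptq_config = GPTQConfig(
    bits=4,            # "Bits"
    group_size=128,    # "GS"
    desc_act=True,     # "Act Order"
    damp_percent=0.1,  # "Damp %"
    dataset="c4",      # placeholder for the calibration dataset
    tokenizer=tokenizer,
)

# Passing a GPTQConfig with a dataset triggers quantization on load.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=gptq_config,
    device_map="auto",
)
```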
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ`:
```shell
mkdir Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --local-dir Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --local-dir Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Doctor Shotgun's Mixtral 8X7B Instruct V0.1 LimaRP ZLoss DARE TIES
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./extra_hdd/Mixtral-8x7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* ./extra_hdd2/Mixtral-8x7B-Instruct-v0.1
* ./extra_hdd/Mixtral-8x7B-v0.1-LimaRP-ZLoss
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./extra_hdd2/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.5
      weight: 1.0
  - model: ./extra_hdd/Mixtral-8x7B-v0.1-LimaRP-ZLoss
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: ./extra_hdd/Mixtral-8x7B-v0.1
parameters:
  #normalize: false
  #int8_mask: true
dtype: bfloat16
```
|
TrishanuDas/sample_model_2 | TrishanuDas | "2025-03-31T15:52:42Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-31T15:52:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rodekruis/nlrc-pmer-midmat-labels | rodekruis | "2024-06-26T13:11:34Z" | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2024-06-26T13:11:00Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# wdejong/midmat_labels
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("wdejong/midmat_labels")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100 | hsohn3 | "2022-07-07T08:33:59Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-06T16:29:49Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/mayo-bert-visit-uncased-wordlevel-block512-batch4-ep100
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9559
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
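As a rough sketch, the optimizer configuration listed above corresponds to constructing transformers' TF `AdamWeightDecay` as follows (an illustration derived from the config dict, not the original training script):

```python
from transformers import AdamWeightDecay  # requires the TensorFlow extras of transformers

# Mirrors the optimizer config listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```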
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.1247 | 0 |
| 3.5129 | 1 |
| 3.4726 | 2 |
| 3.4483 | 3 |
| 3.4395 | 4 |
| 3.4301 | 5 |
| 3.4260 | 6 |
| 3.4131 | 7 |
| 3.3831 | 8 |
| 3.2925 | 9 |
| 3.2454 | 10 |
| 3.2092 | 11 |
| 3.1695 | 12 |
| 3.1346 | 13 |
| 3.0797 | 14 |
| 3.0154 | 15 |
| 2.9557 | 16 |
| 2.8814 | 17 |
| 2.7720 | 18 |
| 2.5472 | 19 |
| 2.3193 | 20 |
| 2.1005 | 21 |
| 1.9331 | 22 |
| 1.7971 | 23 |
| 1.6859 | 24 |
| 1.6062 | 25 |
| 1.5310 | 26 |
| 1.4706 | 27 |
| 1.4203 | 28 |
| 1.3681 | 29 |
| 1.3222 | 30 |
| 1.2939 | 31 |
| 1.2726 | 32 |
| 1.2494 | 33 |
| 1.2330 | 34 |
| 1.2161 | 35 |
| 1.1998 | 36 |
| 1.1874 | 37 |
| 1.1767 | 38 |
| 1.1641 | 39 |
| 1.1550 | 40 |
| 1.1407 | 41 |
| 1.1363 | 42 |
| 1.1272 | 43 |
| 1.1227 | 44 |
| 1.1163 | 45 |
| 1.1065 | 46 |
| 1.1008 | 47 |
| 1.0957 | 48 |
| 1.0837 | 49 |
| 1.0844 | 50 |
| 1.0778 | 51 |
| 1.0741 | 52 |
| 1.0693 | 53 |
| 1.0662 | 54 |
| 1.0608 | 55 |
| 1.0521 | 56 |
| 1.0526 | 57 |
| 1.0476 | 58 |
| 1.0454 | 59 |
| 1.0452 | 60 |
| 1.0348 | 61 |
| 1.0333 | 62 |
| 1.0342 | 63 |
| 1.0293 | 64 |
| 1.0249 | 65 |
| 1.0241 | 66 |
| 1.0194 | 67 |
| 1.0177 | 68 |
| 1.0102 | 69 |
| 1.0055 | 70 |
| 1.0052 | 71 |
| 1.0038 | 72 |
| 1.0005 | 73 |
| 0.9981 | 74 |
| 0.9991 | 75 |
| 0.9950 | 76 |
| 0.9928 | 77 |
| 0.9898 | 78 |
| 0.9906 | 79 |
| 0.9873 | 80 |
| 0.9849 | 81 |
| 0.9808 | 82 |
| 0.9804 | 83 |
| 0.9792 | 84 |
| 0.9789 | 85 |
| 0.9797 | 86 |
| 0.9741 | 87 |
| 0.9781 | 88 |
| 0.9678 | 89 |
| 0.9686 | 90 |
| 0.9651 | 91 |
| 0.9652 | 92 |
| 0.9613 | 93 |
| 0.9599 | 94 |
| 0.9566 | 95 |
| 0.9571 | 96 |
| 0.9577 | 97 |
| 0.9536 | 98 |
| 0.9559 | 99 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ctoraman/RoBERTa-TR-medium-wp-66k | ctoraman | "2022-04-20T07:01:39Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-09T09:15:04Z" | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium WordPiece 66k (uncased)
Pretrained on Turkish text using a masked language modeling (MLM) objective. The model is uncased.
The pretraining corpus is OSCAR's Turkish split, further filtered and cleaned.
The model architecture is similar to bert-medium (8 layers, 8 heads, and a hidden size of 512). The tokenization algorithm is WordPiece, and the vocabulary size is 66.7k.
Details and performance comparisons can be found in this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
#from transformers import AutoModelForSequenceClassification

model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
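For masked-token prediction the checkpoint needs an LM head; below is a minimal sketch assuming `AutoModelForMaskedLM` is used in place of the plain `AutoModel` above (`[model_path]` mirrors the placeholder from the snippet above, and `tokenizer` is the one configured there):

```python
import torch
from transformers import AutoModelForMaskedLM

mlm_model = AutoModelForMaskedLM.from_pretrained([model_path])

inputs = tokenizer("bu bir [MASK] cümledir.", return_tensors="pt")
with torch.no_grad():
    logits = mlm_model(**inputs).logits

# Top-5 token candidates for the masked position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```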
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
``` |
Nayyt/ultiima-32B-Q5_K_M-GGUF | Nayyt | "2025-02-03T05:07:00Z" | 20 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Sakalti/ultiima-32B",
"base_model:quantized:Sakalti/ultiima-32B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-03T05:05:08Z" | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: Sakalti/ultiima-32B
pipeline_tag: text-generation
inference: true
model-index:
- name: ultiima-32B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 68.54
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-32B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 58.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-32B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 43.13
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-32B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.45
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-32B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 24.13
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-32B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.56
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Sakalti/ultiima-32B
name: Open LLM Leaderboard
---
# Nayyt/ultiima-32B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Sakalti/ultiima-32B`](https://huggingface.co/Sakalti/ultiima-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sakalti/ultiima-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nayyt/ultiima-32B-Q5_K_M-GGUF --hf-file ultiima-32b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nayyt/ultiima-32B-Q5_K_M-GGUF --hf-file ultiima-32b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nayyt/ultiima-32B-Q5_K_M-GGUF --hf-file ultiima-32b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nayyt/ultiima-32B-Q5_K_M-GGUF --hf-file ultiima-32b-q5_k_m.gguf -c 2048
```
|
tomaszki/stablelm-53-a | tomaszki | "2024-05-08T10:43:25Z" | 131 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-08T10:42:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
judywq/llama-ft-gec | judywq | "2025-02-27T15:13:49Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:adapter:meta-llama/Llama-3.3-70B-Instruct",
"license:other",
"region:us"
] | null | "2025-02-27T14:21:01Z" | ---
library_name: peft
license: other
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: llama3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3
This model is a fine-tuned version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on the grammar_train dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
ahalev/mcuu-table-2-0f8ud8aq | ahalev | "2024-06-19T06:10:38Z" | 3 | 0 | torch | [
"torch",
"table-2",
"en",
"license:mit",
"region:us"
] | null | "2024-06-19T06:10:36Z" | ---
language: en
library_name: torch
license: mit
tags:
- table-2
---
# Model Card for ahalev/mcuu-table-2-0f8ud8aq
This model corresponds to run(s) in Table 2, specifically the run with the following hyperparameters:
**1)** {'scenario': 1, 'forecast_horizon': 24, 'intrinsic_reward_weight': 0.0001, 'bound_reward_weight': 'cosine', 'noise_std': 0.01}
## Usage
```python
from trainer import Trainer
trainer = Trainer.from_pretrained('ahalev/mcuu-table-2-0f8ud8aq')
algo, env = trainer.algo, trainer.env
# Get an action from a random observation
action, _ = algo.policy.get_action(env.observation_space.sample())
# Evaluate the policy over 2920 timesteps
evaluation = trainer.evaluate()
```
For more information, see the [repo](https://github.com/ahalev/Microgrid-Control-Under-Uncertainty)
and the [paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4866653).
This model was created by [@ahalev](https://hf.co/ahalev). |
Irny/distilbert-base-uncased-finetuned-cola | Irny | "2024-10-02T05:11:59Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-02T05:04:55Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8314
- Matthews Correlation: 0.5365
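The Matthews correlation reported above is the standard CoLA metric; a minimal sketch of how it is computed (hypothetical labels, using scikit-learn):

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0]  # hypothetical acceptability labels
y_pred = [1, 0, 0, 1, 0]  # hypothetical model predictions

print(matthews_corrcoef(y_true, y_pred))
```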
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5175 | 1.0 | 535 | 0.4564 | 0.4551 |
| 0.3468 | 2.0 | 1070 | 0.4703 | 0.5232 |
| 0.2335 | 3.0 | 1605 | 0.6587 | 0.4977 |
| 0.1768 | 4.0 | 2140 | 0.7969 | 0.5156 |
| 0.1309 | 5.0 | 2675 | 0.8314 | 0.5365 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
mradermacher/ZEUS-8B-V29-GGUF | mradermacher | "2025-02-03T00:20:13Z" | 292 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:T145/ZEUS-8B-V29",
"base_model:quantized:T145/ZEUS-8B-V29",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-02T21:54:30Z" | ---
base_model: T145/ZEUS-8B-V29
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/T145/ZEUS-8B-V29
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ZEUS-8B-V29-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
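A single quant file can also be fetched programmatically; a minimal sketch using `huggingface_hub` (the filename is one example from the table below):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/ZEUS-8B-V29-GGUF",
    filename="ZEUS-8B-V29.Q4_K_M.gguf",  # pick any quant from the table below
)
print(path)
```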
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V29-GGUF/resolve/main/ZEUS-8B-V29.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_4_torch.bfloat16_64_32_0.05_4_0.0002 | ferrazzipietro | "2024-02-16T11:44:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-16T11:44:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DataCanvas/MMAlaya | DataCanvas | "2024-02-01T06:50:42Z" | 33 | 1 | transformers | [
"transformers",
"pytorch",
"mmalaya",
"text-generation",
"image-to-text",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-to-text | "2024-01-23T06:20:11Z" | ---
license: apache-2.0
pipeline_tag: image-to-text
---
# MMAlaya
[MMAlaya](https://github.com/DataCanvasIO/MMAlaya/) is a multimodal model built on the large language model [Alaya](https://github.com/DataCanvasIO/Alaya); the model weight files are available at [DataCanvas/MMAlaya](https://huggingface.co/DataCanvas/MMAlaya/tree/main).
MMAlaya consists of the following three modules:
<br>1. The large language model [Alaya-7B-Chat](https://huggingface.co/DataCanvas/Alaya-7B-Chat).
<br>2. The image-text feature encoder, EVA-G from [blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b).
<br>3. The connector that projects image-text features into the large language model, using the Qformer and linear projector from [blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b).
Model training is based mainly on the [LLaVA](https://github.com/haotian-liu/LLaVA) architecture.
On the OpenCompass leaderboard, MMAlaya scores an average of 41.1, ranking 25th.
<br>On the MMBench leaderboard (Chinese test set, open-source models), it scores an average of 58.6, ranking 25th.
For inference, see [inference.py](https://github.com/DataCanvasIO/MMAlaya/blob/main/inference.py).
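A minimal loading sketch (an assumption based on the checkpoint shipping custom model code; the actual inference entry points are defined in the repo's inference.py):

```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code=True is assumed because the checkpoint ships custom model code.
tokenizer = AutoTokenizer.from_pretrained("DataCanvas/MMAlaya", trust_remote_code=True)
model = AutoModel.from_pretrained("DataCanvas/MMAlaya", trust_remote_code=True)
```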
# Citation
MMAlaya is released under the <a href="https://github.com/DataCanvasIO/Alaya/blob/main/LICENSE">Apache 2.0 License</a>, with open model weights and commercial use permitted. If your project uses MMAlaya, please cite it as follows:
```
@misc{datacanvas2024mmalaya,
author = {DataCanvas Ltd.},
title = {mmalaya},
year = {2024},
howpublished = {\url{https://github.com/DataCanvasIO/MMAlaya}},
}
``` |
sail-rvc/PortalTurret | sail-rvc | "2023-07-14T07:29:59Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:29:47Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# PortalTurret
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:29:59
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
mHossain/bangla-para-v1-410000 | mHossain | "2023-05-05T21:19:39Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-05-05T20:20:53Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v1-410000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v1-410000
This model is a fine-tuned version of [mHossain/bangla-para-v1-380000](https://huggingface.co/mHossain/bangla-para-v1-380000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9209
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.2867
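A minimal inference sketch, assuming the standard mT5 seq2seq interface (the input formatting is a guess; check the training setup for any required prefix):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mHossain/bangla-para-v1-410000")
model = AutoModelForSeq2SeqLM.from_pretrained("mHossain/bangla-para-v1-410000")

inputs = tokenizer("একটি উদাহরণ বাংলা বাক্য।", return_tensors="pt")  # "An example Bangla sentence."
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```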
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.1627 | 1.0 | 3375 | 0.9209 | 0.0 | 0.0 | 0.0 | 0.0 | 18.2867 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
brew35/b23867ee-df78-45f8-b4c7-8d8dd4a09f52 | brew35 | "2025-02-01T23:21:04Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-01T22:30:00Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b23867ee-df78-45f8-b4c7-8d8dd4a09f52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4651d8fef772b8d4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4651d8fef772b8d4_train_data.json
type:
field_instruction: text
field_output: processed_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: brew35/b23867ee-df78-45f8-b4c7-8d8dd4a09f52
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/4651d8fef772b8d4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d9552b9e-458d-4842-8468-481cf9ba0907
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d9552b9e-458d-4842-8468-481cf9ba0907
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b23867ee-df78-45f8-b4c7-8d8dd4a09f52
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
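A minimal sketch for loading this LoRA adapter on top of its base model (assuming the standard peft interface):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-v0.2")
model = PeftModel.from_pretrained(base, "brew35/b23867ee-df78-45f8-b4c7-8d8dd4a09f52")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.2")
```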
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1621 | 0.0379 | 200 | 0.0599 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/simonycl_-_llama-3-8b-instruct-metamath-agg-judge-8bits | RichardErkhov | "2025-03-30T23:15:10Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-30T23:09:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-8b-instruct-metamath-agg-judge - bnb 8bits
- Model creator: https://huggingface.co/simonycl/
- Original model: https://huggingface.co/simonycl/llama-3-8b-instruct-metamath-agg-judge/
Original model description:
---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- simonycl/Meta-Llama-3-8B-Instruct_metamath-Meta-Llama-3-8B-Instruct-annotate-judge-5
model-index:
- name: llama-3-8b-instruct-metamath-agg-judge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3-8b-instruct-metamath-agg-judge
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the simonycl/Meta-Llama-3-8B-Instruct_metamath-Meta-Llama-3-8B-Instruct-annotate-judge-5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7013
- Rewards/chosen: -4.0945
- Rewards/rejected: -5.8632
- Rewards/accuracies: 0.7060
- Rewards/margins: 1.7687
- Logps/rejected: -705.5204
- Logps/chosen: -502.4185
- Logits/rejected: -0.8140
- Logits/chosen: -1.0704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2753 | 0.7882 | 400 | 0.7013 | -4.0945 | -5.8632 | 0.7060 | 1.7687 | -705.5204 | -502.4185 | -0.8140 | -1.0704 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
gsn-codes/q-FrozenLake-v1-4x4-noSlippery | gsn-codes | "2023-06-03T04:14:01Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-03T04:13:59Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your install

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks;
# it downloads the pickled Q-table from the Hub.
model = load_from_hub(repo_id="gsn-codes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
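Once loaded, acting greedily from the Q-table looks roughly like this (a sketch assuming the pickle stores its table under a `"qtable"` key, as in the Deep RL course template):

```python
import numpy as np

state, info = env.reset()  # newer gym/gymnasium API; older gym returns only the observation
action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
state, reward, terminated, truncated, info = env.step(action)
```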
|
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization | MayBashendy | "2025-01-15T01:17:06Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T01:07:30Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k15_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4342
- Qwk: 0.4303
- Mse: 1.4342
- Rmse: 1.1976
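The Qwk reported above is quadratic weighted kappa; a minimal sketch of how it can be computed with scikit-learn (hypothetical integer scores):

```python
from sklearn.metrics import cohen_kappa_score

y_true = [3, 2, 4, 1]  # hypothetical gold organization scores
y_pred = [3, 3, 4, 2]  # hypothetical model predictions, rounded to integers

print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```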
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0274 | 2 | 5.4241 | -0.0339 | 5.4241 | 2.3290 |
| No log | 0.0548 | 4 | 3.2173 | 0.0700 | 3.2173 | 1.7937 |
| No log | 0.0822 | 6 | 2.8343 | -0.1321 | 2.8343 | 1.6835 |
| No log | 0.1096 | 8 | 2.8839 | -0.1565 | 2.8839 | 1.6982 |
| No log | 0.1370 | 10 | 2.4133 | -0.0988 | 2.4133 | 1.5535 |
| No log | 0.1644 | 12 | 1.9059 | -0.0548 | 1.9059 | 1.3805 |
| No log | 0.1918 | 14 | 1.3141 | 0.1351 | 1.3141 | 1.1463 |
| No log | 0.2192 | 16 | 1.2165 | 0.1728 | 1.2165 | 1.1030 |
| No log | 0.2466 | 18 | 1.3707 | 0.0632 | 1.3707 | 1.1708 |
| No log | 0.2740 | 20 | 1.8188 | 0.0288 | 1.8188 | 1.3486 |
| No log | 0.3014 | 22 | 2.2172 | -0.0237 | 2.2172 | 1.4890 |
| No log | 0.3288 | 24 | 1.9312 | 0.0251 | 1.9312 | 1.3897 |
| No log | 0.3562 | 26 | 1.4221 | 0.0290 | 1.4221 | 1.1925 |
| No log | 0.3836 | 28 | 1.1666 | 0.3242 | 1.1666 | 1.0801 |
| No log | 0.4110 | 30 | 1.1004 | 0.2939 | 1.1004 | 1.0490 |
| No log | 0.4384 | 32 | 1.0948 | 0.3266 | 1.0948 | 1.0463 |
| No log | 0.4658 | 34 | 1.0828 | 0.3087 | 1.0828 | 1.0406 |
| No log | 0.4932 | 36 | 1.1502 | 0.2590 | 1.1502 | 1.0725 |
| No log | 0.5205 | 38 | 1.2946 | 0.2594 | 1.2946 | 1.1378 |
| No log | 0.5479 | 40 | 1.4440 | 0.0645 | 1.4440 | 1.2017 |
| No log | 0.5753 | 42 | 1.3653 | 0.1424 | 1.3653 | 1.1685 |
| No log | 0.6027 | 44 | 1.2663 | 0.2263 | 1.2663 | 1.1253 |
| No log | 0.6301 | 46 | 1.1350 | 0.2847 | 1.1350 | 1.0654 |
| No log | 0.6575 | 48 | 1.1487 | 0.2917 | 1.1487 | 1.0718 |
| No log | 0.6849 | 50 | 1.1845 | 0.2909 | 1.1845 | 1.0883 |
| No log | 0.7123 | 52 | 1.2423 | 0.3473 | 1.2423 | 1.1146 |
| No log | 0.7397 | 54 | 1.6637 | 0.2321 | 1.6637 | 1.2899 |
| No log | 0.7671 | 56 | 2.1153 | 0.2278 | 2.1153 | 1.4544 |
| No log | 0.7945 | 58 | 2.3962 | 0.2041 | 2.3962 | 1.5480 |
| No log | 0.8219 | 60 | 2.2160 | 0.2200 | 2.2160 | 1.4886 |
| No log | 0.8493 | 62 | 1.8378 | 0.2104 | 1.8378 | 1.3557 |
| No log | 0.8767 | 64 | 1.2348 | 0.3004 | 1.2348 | 1.1112 |
| No log | 0.9041 | 66 | 0.9597 | 0.4343 | 0.9597 | 0.9796 |
| No log | 0.9315 | 68 | 0.9597 | 0.4135 | 0.9597 | 0.9796 |
| No log | 0.9589 | 70 | 1.0380 | 0.3718 | 1.0380 | 1.0188 |
| No log | 0.9863 | 72 | 1.0483 | 0.3652 | 1.0483 | 1.0239 |
| No log | 1.0137 | 74 | 0.9982 | 0.3855 | 0.9982 | 0.9991 |
| No log | 1.0411 | 76 | 0.9629 | 0.3681 | 0.9629 | 0.9813 |
| No log | 1.0685 | 78 | 1.0360 | 0.3519 | 1.0360 | 1.0178 |
| No log | 1.0959 | 80 | 1.4803 | 0.2923 | 1.4803 | 1.2167 |
| No log | 1.1233 | 82 | 1.9171 | 0.1981 | 1.9171 | 1.3846 |
| No log | 1.1507 | 84 | 2.0940 | 0.1898 | 2.0940 | 1.4471 |
| No log | 1.1781 | 86 | 1.7886 | 0.2826 | 1.7886 | 1.3374 |
| No log | 1.2055 | 88 | 1.4346 | 0.3313 | 1.4346 | 1.1978 |
| No log | 1.2329 | 90 | 1.1797 | 0.3828 | 1.1797 | 1.0861 |
| No log | 1.2603 | 92 | 1.1081 | 0.4011 | 1.1081 | 1.0527 |
| No log | 1.2877 | 94 | 1.2066 | 0.3768 | 1.2066 | 1.0985 |
| No log | 1.3151 | 96 | 1.3617 | 0.3677 | 1.3617 | 1.1669 |
| No log | 1.3425 | 98 | 1.3406 | 0.3733 | 1.3406 | 1.1578 |
| No log | 1.3699 | 100 | 1.2476 | 0.3681 | 1.2476 | 1.1170 |
| No log | 1.3973 | 102 | 1.1628 | 0.4073 | 1.1628 | 1.0783 |
| No log | 1.4247 | 104 | 1.1209 | 0.4093 | 1.1209 | 1.0587 |
| No log | 1.4521 | 106 | 0.9532 | 0.4870 | 0.9532 | 0.9763 |
| No log | 1.4795 | 108 | 0.8452 | 0.4869 | 0.8452 | 0.9194 |
| No log | 1.5068 | 110 | 0.8371 | 0.5430 | 0.8371 | 0.9149 |
| No log | 1.5342 | 112 | 0.8477 | 0.5466 | 0.8477 | 0.9207 |
| No log | 1.5616 | 114 | 0.9478 | 0.5371 | 0.9478 | 0.9735 |
| No log | 1.5890 | 116 | 1.2010 | 0.3753 | 1.2010 | 1.0959 |
| No log | 1.6164 | 118 | 1.4765 | 0.2656 | 1.4765 | 1.2151 |
| No log | 1.6438 | 120 | 1.4893 | 0.2619 | 1.4893 | 1.2204 |
| No log | 1.6712 | 122 | 1.2677 | 0.3411 | 1.2677 | 1.1259 |
| No log | 1.6986 | 124 | 1.2124 | 0.3316 | 1.2124 | 1.1011 |
| No log | 1.7260 | 126 | 1.1270 | 0.3525 | 1.1270 | 1.0616 |
| No log | 1.7534 | 128 | 1.0620 | 0.3617 | 1.0620 | 1.0306 |
| No log | 1.7808 | 130 | 1.0050 | 0.3934 | 1.0050 | 1.0025 |
| No log | 1.8082 | 132 | 0.9404 | 0.4549 | 0.9404 | 0.9697 |
| No log | 1.8356 | 134 | 0.8796 | 0.5211 | 0.8796 | 0.9379 |
| No log | 1.8630 | 136 | 0.8794 | 0.5060 | 0.8794 | 0.9378 |
| No log | 1.8904 | 138 | 0.8477 | 0.4975 | 0.8477 | 0.9207 |
| No log | 1.9178 | 140 | 0.8336 | 0.5507 | 0.8336 | 0.9130 |
| No log | 1.9452 | 142 | 0.9322 | 0.5045 | 0.9322 | 0.9655 |
| No log | 1.9726 | 144 | 1.0846 | 0.5082 | 1.0846 | 1.0415 |
| No log | 2.0 | 146 | 1.4007 | 0.4422 | 1.4007 | 1.1835 |
| No log | 2.0274 | 148 | 1.4525 | 0.4343 | 1.4525 | 1.2052 |
| No log | 2.0548 | 150 | 1.2582 | 0.4844 | 1.2582 | 1.1217 |
| No log | 2.0822 | 152 | 1.0291 | 0.5000 | 1.0291 | 1.0145 |
| No log | 2.1096 | 154 | 0.8320 | 0.6002 | 0.8320 | 0.9121 |
| No log | 2.1370 | 156 | 0.8766 | 0.5570 | 0.8766 | 0.9363 |
| No log | 2.1644 | 158 | 0.9921 | 0.5642 | 0.9921 | 0.9960 |
| No log | 2.1918 | 160 | 1.0201 | 0.5245 | 1.0201 | 1.0100 |
| No log | 2.2192 | 162 | 0.9788 | 0.5778 | 0.9788 | 0.9894 |
| No log | 2.2466 | 164 | 0.8084 | 0.6131 | 0.8084 | 0.8991 |
| No log | 2.2740 | 166 | 0.8161 | 0.6102 | 0.8161 | 0.9034 |
| No log | 2.3014 | 168 | 1.0668 | 0.5006 | 1.0668 | 1.0329 |
| No log | 2.3288 | 170 | 1.2055 | 0.4988 | 1.2055 | 1.0980 |
| No log | 2.3562 | 172 | 1.3629 | 0.4723 | 1.3629 | 1.1674 |
| No log | 2.3836 | 174 | 1.0366 | 0.4866 | 1.0366 | 1.0181 |
| No log | 2.4110 | 176 | 0.7316 | 0.6548 | 0.7316 | 0.8553 |
| No log | 2.4384 | 178 | 0.7840 | 0.6126 | 0.7840 | 0.8854 |
| No log | 2.4658 | 180 | 0.9270 | 0.5892 | 0.9270 | 0.9628 |
| No log | 2.4932 | 182 | 1.0072 | 0.5512 | 1.0072 | 1.0036 |
| No log | 2.5205 | 184 | 0.9453 | 0.5641 | 0.9453 | 0.9723 |
| No log | 2.5479 | 186 | 0.8298 | 0.6261 | 0.8298 | 0.9109 |
| No log | 2.5753 | 188 | 0.7460 | 0.6154 | 0.7460 | 0.8637 |
| No log | 2.6027 | 190 | 0.7322 | 0.5991 | 0.7322 | 0.8557 |
| No log | 2.6301 | 192 | 0.7689 | 0.6039 | 0.7689 | 0.8769 |
| No log | 2.6575 | 194 | 0.7966 | 0.5999 | 0.7966 | 0.8925 |
| No log | 2.6849 | 196 | 0.7654 | 0.6029 | 0.7654 | 0.8749 |
| No log | 2.7123 | 198 | 0.7443 | 0.5941 | 0.7443 | 0.8628 |
| No log | 2.7397 | 200 | 0.7420 | 0.6198 | 0.7420 | 0.8614 |
| No log | 2.7671 | 202 | 0.7633 | 0.5933 | 0.7633 | 0.8737 |
| No log | 2.7945 | 204 | 0.7814 | 0.5505 | 0.7814 | 0.8840 |
| No log | 2.8219 | 206 | 0.8173 | 0.5592 | 0.8173 | 0.9040 |
| No log | 2.8493 | 208 | 0.8610 | 0.5866 | 0.8610 | 0.9279 |
| No log | 2.8767 | 210 | 0.8581 | 0.6018 | 0.8581 | 0.9264 |
| No log | 2.9041 | 212 | 0.8224 | 0.5608 | 0.8224 | 0.9068 |
| No log | 2.9315 | 214 | 0.8207 | 0.5513 | 0.8207 | 0.9059 |
| No log | 2.9589 | 216 | 0.8474 | 0.5449 | 0.8474 | 0.9205 |
| No log | 2.9863 | 218 | 0.8660 | 0.5573 | 0.8660 | 0.9306 |
| No log | 3.0137 | 220 | 0.9340 | 0.5954 | 0.9340 | 0.9665 |
| No log | 3.0411 | 222 | 0.9299 | 0.5992 | 0.9299 | 0.9643 |
| No log | 3.0685 | 224 | 0.8808 | 0.5980 | 0.8808 | 0.9385 |
| No log | 3.0959 | 226 | 0.8191 | 0.5766 | 0.8191 | 0.9050 |
| No log | 3.1233 | 228 | 0.8094 | 0.5532 | 0.8094 | 0.8997 |
| No log | 3.1507 | 230 | 0.8363 | 0.5277 | 0.8363 | 0.9145 |
| No log | 3.1781 | 232 | 0.8181 | 0.5304 | 0.8181 | 0.9045 |
| No log | 3.2055 | 234 | 0.8230 | 0.5679 | 0.8230 | 0.9072 |
| No log | 3.2329 | 236 | 0.8445 | 0.5224 | 0.8445 | 0.9190 |
| No log | 3.2603 | 238 | 0.8686 | 0.6046 | 0.8686 | 0.9320 |
| No log | 3.2877 | 240 | 1.0920 | 0.5429 | 1.0920 | 1.0450 |
| No log | 3.3151 | 242 | 1.0344 | 0.5490 | 1.0344 | 1.0170 |
| No log | 3.3425 | 244 | 0.8241 | 0.5823 | 0.8241 | 0.9078 |
| No log | 3.3699 | 246 | 0.8872 | 0.5611 | 0.8872 | 0.9419 |
| No log | 3.3973 | 248 | 1.0130 | 0.5706 | 1.0130 | 1.0065 |
| No log | 3.4247 | 250 | 0.9121 | 0.5174 | 0.9121 | 0.9550 |
| No log | 3.4521 | 252 | 0.7662 | 0.5640 | 0.7662 | 0.8753 |
| No log | 3.4795 | 254 | 0.7825 | 0.5859 | 0.7825 | 0.8846 |
| No log | 3.5068 | 256 | 0.9138 | 0.5662 | 0.9138 | 0.9559 |
| No log | 3.5342 | 258 | 0.9908 | 0.5232 | 0.9908 | 0.9954 |
| No log | 3.5616 | 260 | 1.0046 | 0.5347 | 1.0046 | 1.0023 |
| No log | 3.5890 | 262 | 0.9161 | 0.5548 | 0.9161 | 0.9571 |
| No log | 3.6164 | 264 | 0.8314 | 0.5849 | 0.8314 | 0.9118 |
| No log | 3.6438 | 266 | 0.8062 | 0.5828 | 0.8062 | 0.8979 |
| No log | 3.6712 | 268 | 0.8389 | 0.6146 | 0.8389 | 0.9159 |
| No log | 3.6986 | 270 | 0.8192 | 0.5999 | 0.8192 | 0.9051 |
| No log | 3.7260 | 272 | 0.8006 | 0.5999 | 0.8006 | 0.8948 |
| No log | 3.7534 | 274 | 0.7863 | 0.6001 | 0.7863 | 0.8867 |
| No log | 3.7808 | 276 | 0.8460 | 0.5804 | 0.8460 | 0.9198 |
| No log | 3.8082 | 278 | 0.9878 | 0.4939 | 0.9878 | 0.9939 |
| No log | 3.8356 | 280 | 1.0962 | 0.5021 | 1.0962 | 1.0470 |
| No log | 3.8630 | 282 | 1.1068 | 0.5007 | 1.1068 | 1.0520 |
| No log | 3.8904 | 284 | 1.0624 | 0.4942 | 1.0624 | 1.0308 |
| No log | 3.9178 | 286 | 1.0041 | 0.5287 | 1.0041 | 1.0021 |
| No log | 3.9452 | 288 | 0.8922 | 0.5493 | 0.8922 | 0.9446 |
| No log | 3.9726 | 290 | 0.8116 | 0.6055 | 0.8116 | 0.9009 |
| No log | 4.0 | 292 | 0.7825 | 0.5796 | 0.7825 | 0.8846 |
| No log | 4.0274 | 294 | 0.7623 | 0.5920 | 0.7623 | 0.8731 |
| No log | 4.0548 | 296 | 0.7719 | 0.6062 | 0.7719 | 0.8786 |
| No log | 4.0822 | 298 | 0.8257 | 0.5806 | 0.8257 | 0.9087 |
| No log | 4.1096 | 300 | 0.9352 | 0.5329 | 0.9352 | 0.9670 |
| No log | 4.1370 | 302 | 0.9327 | 0.5392 | 0.9327 | 0.9658 |
| No log | 4.1644 | 304 | 0.8358 | 0.5818 | 0.8358 | 0.9142 |
| No log | 4.1918 | 306 | 0.7466 | 0.6313 | 0.7466 | 0.8641 |
| No log | 4.2192 | 308 | 0.7337 | 0.6410 | 0.7337 | 0.8565 |
| No log | 4.2466 | 310 | 0.7460 | 0.6275 | 0.7460 | 0.8637 |
| No log | 4.2740 | 312 | 0.7727 | 0.6006 | 0.7727 | 0.8790 |
| No log | 4.3014 | 314 | 0.9044 | 0.5557 | 0.9044 | 0.9510 |
| No log | 4.3288 | 316 | 1.1197 | 0.5166 | 1.1197 | 1.0582 |
| No log | 4.3562 | 318 | 1.2738 | 0.4613 | 1.2738 | 1.1286 |
| No log | 4.3836 | 320 | 1.2519 | 0.4829 | 1.2519 | 1.1189 |
| No log | 4.4110 | 322 | 1.0633 | 0.5260 | 1.0633 | 1.0312 |
| No log | 4.4384 | 324 | 0.8332 | 0.6060 | 0.8332 | 0.9128 |
| No log | 4.4658 | 326 | 0.7973 | 0.5990 | 0.7973 | 0.8929 |
| No log | 4.4932 | 328 | 0.7903 | 0.5889 | 0.7903 | 0.8890 |
| No log | 4.5205 | 330 | 0.7616 | 0.5993 | 0.7616 | 0.8727 |
| No log | 4.5479 | 332 | 0.7833 | 0.5947 | 0.7833 | 0.8850 |
| No log | 4.5753 | 334 | 0.8322 | 0.5972 | 0.8322 | 0.9123 |
| No log | 4.6027 | 336 | 0.8362 | 0.5826 | 0.8362 | 0.9144 |
| No log | 4.6301 | 338 | 0.7945 | 0.5857 | 0.7945 | 0.8914 |
| No log | 4.6575 | 340 | 0.7557 | 0.5738 | 0.7557 | 0.8693 |
| No log | 4.6849 | 342 | 0.7284 | 0.5950 | 0.7284 | 0.8535 |
| No log | 4.7123 | 344 | 0.7226 | 0.6323 | 0.7226 | 0.8500 |
| No log | 4.7397 | 346 | 0.7268 | 0.6141 | 0.7268 | 0.8525 |
| No log | 4.7671 | 348 | 0.7920 | 0.5256 | 0.7920 | 0.8900 |
| No log | 4.7945 | 350 | 1.0256 | 0.5299 | 1.0256 | 1.0127 |
| No log | 4.8219 | 352 | 1.1870 | 0.4747 | 1.1870 | 1.0895 |
| No log | 4.8493 | 354 | 1.2002 | 0.4828 | 1.2002 | 1.0955 |
| No log | 4.8767 | 356 | 1.0658 | 0.5101 | 1.0658 | 1.0324 |
| No log | 4.9041 | 358 | 0.9661 | 0.5742 | 0.9661 | 0.9829 |
| No log | 4.9315 | 360 | 0.9620 | 0.5815 | 0.9620 | 0.9808 |
| No log | 4.9589 | 362 | 1.0531 | 0.5633 | 1.0531 | 1.0262 |
| No log | 4.9863 | 364 | 1.0957 | 0.5286 | 1.0957 | 1.0467 |
| No log | 5.0137 | 366 | 1.0850 | 0.5338 | 1.0850 | 1.0416 |
| No log | 5.0411 | 368 | 1.0877 | 0.5128 | 1.0877 | 1.0429 |
| No log | 5.0685 | 370 | 1.1431 | 0.4936 | 1.1431 | 1.0692 |
| No log | 5.0959 | 372 | 1.3236 | 0.4455 | 1.3236 | 1.1505 |
| No log | 5.1233 | 374 | 1.4752 | 0.3641 | 1.4752 | 1.2146 |
| No log | 5.1507 | 376 | 1.4252 | 0.3850 | 1.4252 | 1.1938 |
| No log | 5.1781 | 378 | 1.2053 | 0.4501 | 1.2053 | 1.0979 |
| No log | 5.2055 | 380 | 0.9910 | 0.5581 | 0.9910 | 0.9955 |
| No log | 5.2329 | 382 | 0.9226 | 0.5934 | 0.9226 | 0.9605 |
| No log | 5.2603 | 384 | 0.8915 | 0.5935 | 0.8915 | 0.9442 |
| No log | 5.2877 | 386 | 0.9670 | 0.5753 | 0.9670 | 0.9833 |
| No log | 5.3151 | 388 | 1.1171 | 0.5321 | 1.1171 | 1.0569 |
| No log | 5.3425 | 390 | 1.1020 | 0.5741 | 1.1020 | 1.0498 |
| No log | 5.3699 | 392 | 1.1399 | 0.5425 | 1.1399 | 1.0677 |
| No log | 5.3973 | 394 | 1.1955 | 0.5155 | 1.1955 | 1.0934 |
| No log | 5.4247 | 396 | 1.0920 | 0.5134 | 1.0920 | 1.0450 |
| No log | 5.4521 | 398 | 0.9864 | 0.5216 | 0.9864 | 0.9932 |
| No log | 5.4795 | 400 | 0.9037 | 0.5320 | 0.9037 | 0.9506 |
| No log | 5.5068 | 402 | 0.8805 | 0.5463 | 0.8805 | 0.9384 |
| No log | 5.5342 | 404 | 0.8771 | 0.5486 | 0.8771 | 0.9366 |
| No log | 5.5616 | 406 | 0.8814 | 0.6011 | 0.8814 | 0.9388 |
| No log | 5.5890 | 408 | 0.8395 | 0.6091 | 0.8395 | 0.9162 |
| No log | 5.6164 | 410 | 0.8424 | 0.6241 | 0.8424 | 0.9178 |
| No log | 5.6438 | 412 | 0.9268 | 0.5639 | 0.9268 | 0.9627 |
| No log | 5.6712 | 414 | 1.1021 | 0.4971 | 1.1021 | 1.0498 |
| No log | 5.6986 | 416 | 1.2086 | 0.4903 | 1.2086 | 1.0993 |
| No log | 5.7260 | 418 | 1.1475 | 0.4971 | 1.1475 | 1.0712 |
| No log | 5.7534 | 420 | 1.0505 | 0.5155 | 1.0505 | 1.0250 |
| No log | 5.7808 | 422 | 0.9972 | 0.5253 | 0.9972 | 0.9986 |
| No log | 5.8082 | 424 | 0.9341 | 0.5534 | 0.9341 | 0.9665 |
| No log | 5.8356 | 426 | 0.9878 | 0.5592 | 0.9879 | 0.9939 |
| No log | 5.8630 | 428 | 1.0541 | 0.5731 | 1.0541 | 1.0267 |
| No log | 5.8904 | 430 | 1.0357 | 0.5825 | 1.0357 | 1.0177 |
| No log | 5.9178 | 432 | 0.9785 | 0.5956 | 0.9785 | 0.9892 |
| No log | 5.9452 | 434 | 0.8679 | 0.6049 | 0.8679 | 0.9316 |
| No log | 5.9726 | 436 | 0.7931 | 0.6114 | 0.7931 | 0.8906 |
| No log | 6.0 | 438 | 0.7550 | 0.6036 | 0.7550 | 0.8689 |
| No log | 6.0274 | 440 | 0.7454 | 0.6453 | 0.7454 | 0.8634 |
| No log | 6.0548 | 442 | 0.7479 | 0.6436 | 0.7479 | 0.8648 |
| No log | 6.0822 | 444 | 0.7658 | 0.6327 | 0.7658 | 0.8751 |
| No log | 6.1096 | 446 | 0.8526 | 0.5803 | 0.8526 | 0.9233 |
| No log | 6.1370 | 448 | 0.9616 | 0.5446 | 0.9616 | 0.9806 |
| No log | 6.1644 | 450 | 0.9365 | 0.5454 | 0.9365 | 0.9677 |
| No log | 6.1918 | 452 | 0.8529 | 0.5621 | 0.8529 | 0.9235 |
| No log | 6.2192 | 454 | 0.7884 | 0.5898 | 0.7884 | 0.8879 |
| No log | 6.2466 | 456 | 0.7566 | 0.5872 | 0.7566 | 0.8698 |
| No log | 6.2740 | 458 | 0.7884 | 0.5492 | 0.7884 | 0.8879 |
| No log | 6.3014 | 460 | 0.7828 | 0.6132 | 0.7828 | 0.8848 |
| No log | 6.3288 | 462 | 0.7774 | 0.6148 | 0.7774 | 0.8817 |
| No log | 6.3562 | 464 | 0.7988 | 0.5939 | 0.7988 | 0.8937 |
| No log | 6.3836 | 466 | 0.8007 | 0.6099 | 0.8007 | 0.8948 |
| No log | 6.4110 | 468 | 0.8112 | 0.6079 | 0.8112 | 0.9007 |
| No log | 6.4384 | 470 | 0.9288 | 0.5247 | 0.9288 | 0.9638 |
| No log | 6.4658 | 472 | 1.0515 | 0.5229 | 1.0515 | 1.0254 |
| No log | 6.4932 | 474 | 1.0911 | 0.4943 | 1.0911 | 1.0446 |
| No log | 6.5205 | 476 | 1.1406 | 0.4652 | 1.1406 | 1.0680 |
| No log | 6.5479 | 478 | 1.1275 | 0.4755 | 1.1275 | 1.0618 |
| No log | 6.5753 | 480 | 1.0940 | 0.5163 | 1.0940 | 1.0459 |
| No log | 6.6027 | 482 | 0.9352 | 0.5719 | 0.9352 | 0.9670 |
| No log | 6.6301 | 484 | 0.8179 | 0.6143 | 0.8179 | 0.9044 |
| No log | 6.6575 | 486 | 0.8642 | 0.6241 | 0.8642 | 0.9296 |
| No log | 6.6849 | 488 | 0.9283 | 0.6166 | 0.9283 | 0.9635 |
| No log | 6.7123 | 490 | 1.0792 | 0.5543 | 1.0792 | 1.0388 |
| No log | 6.7397 | 492 | 1.1565 | 0.5306 | 1.1565 | 1.0754 |
| No log | 6.7671 | 494 | 1.2049 | 0.5279 | 1.2049 | 1.0977 |
| No log | 6.7945 | 496 | 1.1066 | 0.4992 | 1.1066 | 1.0520 |
| No log | 6.8219 | 498 | 0.9919 | 0.5333 | 0.9919 | 0.9960 |
| 0.468 | 6.8493 | 500 | 1.0055 | 0.5245 | 1.0055 | 1.0028 |
| 0.468 | 6.8767 | 502 | 1.0046 | 0.5308 | 1.0046 | 1.0023 |
| 0.468 | 6.9041 | 504 | 0.9039 | 0.5728 | 0.9039 | 0.9507 |
| 0.468 | 6.9315 | 506 | 0.7984 | 0.6362 | 0.7984 | 0.8935 |
| 0.468 | 6.9589 | 508 | 0.7766 | 0.6362 | 0.7766 | 0.8812 |
| 0.468 | 6.9863 | 510 | 0.8437 | 0.6172 | 0.8437 | 0.9185 |
| 0.468 | 7.0137 | 512 | 1.0259 | 0.5188 | 1.0259 | 1.0129 |
| 0.468 | 7.0411 | 514 | 1.3055 | 0.4659 | 1.3055 | 1.1426 |
| 0.468 | 7.0685 | 516 | 1.5696 | 0.4576 | 1.5696 | 1.2528 |
| 0.468 | 7.0959 | 518 | 1.5971 | 0.4458 | 1.5971 | 1.2638 |
| 0.468 | 7.1233 | 520 | 1.4342 | 0.4303 | 1.4342 | 1.1976 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
yeongsang2/polyglot-ko-12.8B-v.1.02-checkpoint-4500-cbnu | yeongsang2 | "2023-08-24T02:17:20Z" | 2 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-24T02:11:22Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
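For reference, here is a minimal sketch of loading a base model with this 8-bit config and attaching the adapter (the base-model id is an assumption; the card does not state it):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 8-bit settings listed above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base_id = "EleutherAI/polyglot-ko-12.8b"  # assumed base model for this adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "yeongsang2/polyglot-ko-12.8B-v.1.02-checkpoint-4500-cbnu")
```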
### Framework versions
- PEFT 0.6.0.dev0
|
debesu/brhanubal-meru | debesu | "2025-02-19T08:20:22Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-19T07:27:47Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: brhanubal-meru
---
# Brhanubal Meru
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `brhanubal-meru` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('debesu/brhanubal-meru', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
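Per the trigger-word note above, include `brhanubal-meru` in the prompt, e.g. `image = pipeline('brhanubal-meru portrait photo').images[0]`.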
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tuantmdev/081013e0-45c6-4420-8473-606a260bc93a | tuantmdev | "2025-01-26T11:03:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-26T10:44:07Z" | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 081013e0-45c6-4420-8473-606a260bc93a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e67dcd9c278ad31_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e67dcd9c278ad31_train_data.json
type:
field_instruction: Source
field_output: Sentence
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuantmdev/081013e0-45c6-4420-8473-606a260bc93a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/3e67dcd9c278ad31_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e605bf2b-967c-4862-91e2-56aa39235641
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e605bf2b-967c-4862-91e2-56aa39235641
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 081013e0-45c6-4420-8473-606a260bc93a
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6461
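As the usage sections below are unfilled, here is a minimal loading sketch (assuming this repo holds the LoRA adapter produced by the config above; the generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, "tuantmdev/081013e0-45c6-4420-8473-606a260bc93a")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```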
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 5.9289 |
| 22.7641 | 0.0046 | 10 | 5.4268 |
| 18.4286 | 0.0093 | 20 | 4.1988 |
| 14.7304 | 0.0139 | 30 | 3.7866 |
| 14.8912 | 0.0186 | 40 | 3.6673 |
| 15.4443 | 0.0232 | 50 | 3.6461 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
eduiqe/u8-LunarLander | eduiqe | "2023-05-31T07:29:00Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-31T07:22:31Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -226.85 +/- 113.36
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
AnkushRaut216/Contrastive-Finetuned-for-AI-all-MiniLM-L6-V2 | AnkushRaut216 | "2025-04-15T02:04:02Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-04-15T01:58:01Z" | <!-- card content was an HTTP 429 rate-limit page; no model card available --> |
xuandin/semviqa-tc-infoxlm-viwikifc | xuandin | "2025-03-07T02:54:31Z" | 0 | 0 | null | [
"safetensors",
"claim_verification",
"region:us"
] | null | "2025-03-07T02:41:51Z" | ```Python
import torch
import torch.nn.functional as F
tokenizer = AutoTokenizer.from_pretrained("xuandin/semviqa-tc-infoxlm-viwikifc")
model = ClaimModelForClassification.from_pretrained("xuandin/semviqa-tc-infoxlm-viwikifc")
claim = "Chiến tranh với Campuchia đã kết thúc trước khi Việt Nam thống nhất."
evidence = "Sau khi thống nhất, Việt Nam tiếp tục gặp khó khăn do sự sụp đổ và tan rã của đồng minh Liên Xô cùng Khối phía Đông, các lệnh cấm vận của Hoa Kỳ, chiến tranh với Campuchia, biên giới giáp Trung Quốc và hậu quả của chính sách bao cấp sau nhiều năm áp dụng."
inputs = tokenizer(
claim,
evidence,
truncation="only_second",
add_special_tokens=True,
max_length=256,
padding='max_length',
return_attention_mask=True,
return_token_type_ids=False,
return_tensors='pt',
)
labels = ["NEI", "SUPPORTED", "REFUTED"]
with torch.no_grad():
outputs = model(**inputs)
logits = outputs["logits"]
probabilities = F.softmax(logits, dim=1).squeeze()
for i, (label, prob) in enumerate(zip(labels, probabilities.tolist()), start=1):
print(f"{i}) {label} {prob:.4f}")
# 1) NEI 0.0001
# 2) SUPPORTED 0.0001
# 3) REFUTED 0.9998
``` |
RichardErkhov/ethzanalytics_-_ai-msgbot-gpt2-L-dialogue-4bits | RichardErkhov | "2025-03-08T11:53:51Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-08T11:53:21Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ai-msgbot-gpt2-L-dialogue - bnb 4bits
- Model creator: https://huggingface.co/ethzanalytics/
- Original model: https://huggingface.co/ethzanalytics/ai-msgbot-gpt2-L-dialogue/
Original model description:
# ai-msgbot GPT2-L + daily dialogues
_NOTE: this model card is a WIP_
GPT2-L (774M parameters) fine-tuned on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using `aitextgen`. This model was then subsequently further fine-tuned on the [Daily Dialogues](http://yanran.li/dailydialog) dataset for an additional 40k steps, this time with **35** of 36 layers frozen.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test is to explicitly add `person beta` to the **end** of the prompt text. This forces the model to respond to the entered chat prompt instead of first extending the prompt and then responding to it (which may cut off the response text due to the Inference API limits).
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?
person beta:
no, i don't like
```
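The same pattern in plain `transformers` (a sketch; loading this 4-bit checkpoint requires `bitsandbytes`, and the sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/ethzanalytics_-_ai-msgbot-gpt2-L-dialogue-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "do you like to eat beans?\nperson beta:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```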
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
richinfoai/ritrieve_zh_v1 | richinfoai | "2025-03-25T02:40:34Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"zh",
"dataset:BAAI/Infinity-Instruct",
"dataset:opencsg/chinese-fineweb-edu",
"arxiv:2412.19048",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-25T01:23:29Z" | ---
datasets:
- BAAI/Infinity-Instruct
- opencsg/chinese-fineweb-edu
language:
- zh
pipeline_tag: sentence-similarity
library_name: sentence-transformers
license: mit
---
## Introduction
This model was trained by [richinfoai](https://www.richinfo.cn/).
Following [Stella and Jasper models](https://arxiv.org/pdf/2412.19048), we performed distillation training from
[lier007/xiaobu-embedding-v2](https://huggingface.co/lier007/xiaobu-embedding-v2),
[dunzhang/stella-large-zh-v3-1792d](https://huggingface.co/dunzhang/stella-large-zh-v3-1792d)
and [BAAI/bge-multilingual-gemma2](https://huggingface.co/BAAI/bge-multilingual-gemma2).
Thanks to their outstanding performance, our model has achieved excellent results on MTEB(cmn, v1).
We believe this model once again demonstrates the effectiveness of distillation learning.
In the future, we will train more bilingual vector models based on various excellent vector training methods.
## Methods
### Stage1
We use [BAAI/Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct)
and [opencsg/chinese-fineweb-edu](https://huggingface.co/datasets/opencsg/chinese-fineweb-edu)
as training data to distill from the above three models.
In this stage, we use only a cosine loss.
### Stage2
The objective of stage2 is to reduce the output dimensionality.
We use the same training data as in stage1 with a `similarity loss`. After stage2, the output dimensionality of our model is 1792.
## Usage
This model does not need instructions and you can use it in `SentenceTransformer`:
```python
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
from sentence_transformers import SentenceTransformer
text_encoder = SentenceTransformer("richinfoai/ritrieve_zh_v1")
texts = [
"什么是人工智能",
"介绍一下主流的LLM",
"人工智能(AI)是模拟人类智能的计算机系统,能够执行学习、推理和决策等任务。它通过算法和大数据实现自动化,广泛应用于各行各业。"
]
vectors = text_encoder.encode(texts, normalize_embeddings=True)
print(vectors @ vectors.T)
# [[0.9999999 0.67707014 0.91421044]
# [0.67707014 0.9999998 0.6353945 ]
# [0.91421044 0.6353945 1.0000001 ]]
``` |
JAdeojo/xlm-roberta-large-lora-consumer-complaints-cfpb_checkpoint2 | JAdeojo | "2023-07-28T13:13:13Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-28T13:13:07Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Professor/distilled-inkubaLM | Professor | "2025-04-07T15:10:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T14:51:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
allstax/AI-G-Expand | allstax | "2024-02-21T20:51:15Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-02-21T18:09:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mudogruer/Gemma-7b-MedMCQA | mudogruer | "2024-05-10T22:41:14Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b-it",
"base_model:adapter:google/gemma-7b-it",
"region:us"
] | null | "2024-05-10T22:40:07Z" | ---
library_name: peft
base_model: google/gemma-7b-it
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
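In the absence of author-provided code, here is a minimal sketch (assuming this repo holds a PEFT LoRA adapter for the base model listed above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-7b-it"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "mudogruer/Gemma-7b-MedMCQA")
```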
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
maithal/my_awesome_model | maithal | "2025-02-16T16:21:38Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-14T11:24:24Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: maithal/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# maithal/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1376
- Validation Loss: 0.2027
- Train Accuracy: 0.9299
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
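The optimizer entry above corresponds roughly to this Keras setup (a sketch reconstructed from the config dict):
```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 7810 steps, as in the config.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=7810,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```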
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2545 | 0.1986 | 0.9224 | 0 |
| 0.1376 | 0.2027 | 0.9299 | 1 |
### Framework versions
- Transformers 4.48.3
- TensorFlow 2.18.0
- Datasets 3.3.0
- Tokenizers 0.21.0
|
ProomptEngineer/pe-caricature-style | ProomptEngineer | "2023-09-01T10:11:30Z" | 9 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-01T10:11:27Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PECaricature
widget:
- text: PECaricature
---
# PE Caricature [Style]

<h2 id="heading-63">If you want to donate:</h2><h2 id="heading-64"><a target="_blank" rel="ugc" href="https://ko-fi.com/proomptengineer">https://ko-fi.com/proomptengineer</a></h2><h2 id="heading-4">Model creates a cartoonish real caricature.</h2><h2 id="heading-5">Recommended weights 0.8-1</h2><h2 id="heading-6">Sometimes creats random person idk why.</h2>
## Image examples for the model:









|
Gregorig/roberta-large-finetuned-t_value | Gregorig | "2024-06-05T20:19:24Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-05T20:18:23Z" | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-finetuned-t_value
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-t_value
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0488
- Accuracy: 0.985
- F1: 0.9860
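A minimal inference sketch (the label names and their meanings are not documented in this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Gregorig/roberta-large-finetuned-t_value")
print(clf("your input text here"))
```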
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7103 | 1.0 | 51 | 0.2806 | 0.975 | 0.9756 |
| 0.4484 | 2.0 | 102 | 0.0968 | 0.985 | 0.9846 |
| 0.246 | 3.0 | 153 | 0.0488 | 0.985 | 0.9860 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
alnrg2arg/blockchainlabs_test3_seminar | alnrg2arg | "2024-02-02T01:55:01Z" | 50 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-02T01:51:09Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
---
# blockchainlabs_test3_seminar
blockchainlabs_test3_seminar is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 32]
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: #bfloat16 # bfloat16 is faster than float16 when training.
``` |
ewre324/ewre324-Thinker-SmolLM2-135M-Instruct-Reasoning | ewre324 | "2025-01-07T04:30:18Z" | 32 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"onnx",
"transformers.js",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-07T03:34:27Z" | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
base_model:
- HuggingFaceTB/SmolLM2-135M
---
This model is aimed at chain-of-thought reasoning and has been trained on human-generated, AI-reasoned questions and answers from https://huggingface.co/datasets/KingNish/reasoning-base-20k.
# Uploaded model
- **Developed by:** ewre324
- **License:** apache-2.0
- **Finetuned from model :** HuggingFaceTB/SmolLM2-135M-Instruct
# SmolLM2-Reasoning
## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 135M model was trained on 2 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling (for the 1.7B) thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk and finetuning code at https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-135M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is gravity?"}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-135M-Instruct --device cpu
```
## Evaluation
TODO
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 2T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 2 A100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
RayneAmes/ichigo_v1 | RayneAmes | "2025-02-10T18:39:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-10T18:36:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
patrickrho/evryai-e1-m-ko-v7-Q8_0-GGUF | patrickrho | "2025-03-07T05:05:03Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:patrickrho/evryai-e1-m-ko-v7",
"base_model:quantized:patrickrho/evryai-e1-m-ko-v7",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-07T05:02:21Z" | ---
base_model: patrickrho/evryai-e1-m-ko-v7
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# patrickrho/evryai-e1-m-ko-v7-Q8_0-GGUF
This model was converted to GGUF format from [`patrickrho/evryai-e1-m-ko-v7`](https://huggingface.co/patrickrho/evryai-e1-m-ko-v7) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/patrickrho/evryai-e1-m-ko-v7) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo patrickrho/evryai-e1-m-ko-v7-Q8_0-GGUF --hf-file evryai-e1-m-ko-v7-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo patrickrho/evryai-e1-m-ko-v7-Q8_0-GGUF --hf-file evryai-e1-m-ko-v7-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo patrickrho/evryai-e1-m-ko-v7-Q8_0-GGUF --hf-file evryai-e1-m-ko-v7-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo patrickrho/evryai-e1-m-ko-v7-Q8_0-GGUF --hf-file evryai-e1-m-ko-v7-q8_0.gguf -c 2048
```
|
fifxus/d1d7b901-f8da-437a-a1c9-7f06ad820f53 | fifxus | "2025-02-08T10:11:29Z" | 15 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-08T09:52:20Z" | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1d7b901-f8da-437a-a1c9-7f06ad820f53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1d92931102f6ed76_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1d92931102f6ed76_train_data.json
type:
field_instruction: message_1
field_output: message_2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/d1d7b901-f8da-437a-a1c9-7f06ad820f53
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/1d92931102f6ed76_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a087b1b9-ecc0-4d6c-ab2f-9d8295de3014
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: a087b1b9-ecc0-4d6c-ab2f-9d8295de3014
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# d1d7b901-f8da-437a-a1c9-7f06ad820f53
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5179
## Model description
More information needed
## Intended uses & limitations
More information needed
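The config above trains a LoRA adapter with `peft` on top of `bigscience/bloomz-560m`, so inference typically means loading the base model and attaching this adapter. A minimal, untested sketch assuming the standard `peft`/`transformers` APIs:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "bigscience/bloomz-560m"
adapter = "fifxus/d1d7b901-f8da-437a-a1c9-7f06ad820f53"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```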
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.3111 | 0.2127 | 500 | 1.5179 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ykarout/Phi4-ThinkMode | ykarout | "2025-03-26T13:08:25Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-26T13:08:20Z" | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ykarout
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
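Usage is not documented; a minimal sketch assuming the repository hosts full merged weights usable with the standard `transformers` pipeline (if it only contains a LoRA adapter, attach it with `peft` instead):
```python
from transformers import pipeline

# Assumes merged weights; adjust if the repo holds only an adapter
pipe = pipeline("text-generation", model="ykarout/Phi4-ThinkMode", device_map="auto")
print(pipe("Think step by step: what is 17 * 23?", max_new_tokens=128)[0]["generated_text"])
```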
|
homeb82784/qwen2.5-7b-instruct-cpt-v8.0 | homeb82784 | "2024-12-06T12:06:26Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-06T12:02:30Z" | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** homeb82784
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
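Since the card states the model was trained with Unsloth, one plausible way to load it for fast inference is Unsloth's own API. A minimal sketch, assuming the repo hosts weights that `FastLanguageModel` can resolve:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="homeb82784/qwen2.5-7b-instruct-cpt-v8.0",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast generation path

inputs = tokenizer("Summarize what continued pretraining is.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```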
|
PrunaAI/volo_d4_448.sail_in1k-turbo-tiny-green-smashed | PrunaAI | "2024-08-02T15:41:08Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-19T13:30:46Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir volo_d4_448.sail_in1k-turbo-tiny-green-smashed
huggingface-cli download PrunaAI/volo_d4_448.sail_in1k-turbo-tiny-green-smashed --local-dir volo_d4_448.sail_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "volo_d4_448.sail_in1k-turbo-tiny-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "volo_d4_448.sail_in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch; image = torch.rand(1, 3, 224, 224).to('cuda')
smashed_model(image)
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model volo_d4_448.sail_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
albertus-sussex/simcse-test-book-reference_5_to_verify_5-fold-2-bs-256-lr-3e-05-epochs-3-uq-True | albertus-sussex | "2025-03-25T11:27:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-25T11:26:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
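The card is auto-generated and leaves this section empty; below is a minimal sketch assuming the standard `transformers` feature-extraction API and SimCSE-style `[CLS]` pooling (the pooling choice is an assumption, not documented here):
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "albertus-sussex/simcse-test-book-reference_5_to_verify_5-fold-2-bs-256-lr-3e-05-epochs-3-uq-True"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

texts = ["The Great Gatsby, 1925", "To Kill a Mockingbird, 1960"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    emb = model(**batch).last_hidden_state[:, 0]  # [CLS] pooling, as in SimCSE
print(torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0).item())
```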
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katielink/esm_if1_gvp4_t16_142M_UR50 | katielink | "2023-09-13T12:59:35Z" | 0 | 0 | null | [
"biology",
"protein",
"license:mit",
"region:us"
] | null | "2023-09-08T14:19:22Z" | ---
license: mit
tags:
- biology
- protein
---
# ESM-IF
Checkpoint of the ESM Inverse Folding model.
Please see the ESM team's [Github repo](https://github.com/facebookresearch/esm/blob/main/README.md#invf) for more information.
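For quick orientation, a minimal sketch following the fair-esm README (the `5YH2.pdb` / chain `C` example is the one used upstream; note that `esm.pretrained` downloads the official weights, which this repo mirrors, and that inverse folding requires the `torch_geometric` extras):
```python
import esm
import esm.inverse_folding

model, alphabet = esm.pretrained.esm_if1_gvp4_t16_142M_UR50()
model = model.eval()

# Design a sequence for chain C of a PDB/mmCIF structure
coords, native_seq = esm.inverse_folding.util.load_coords("5YH2.pdb", "C")
sampled_seq = model.sample(coords, temperature=1.0)
print(sampled_seq)
```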
## Citations
If you find the models useful in your research, we ask that you cite the relevant papers:
```bibtex
@article{rives2019biological,
author={Rives, Alexander and Meier, Joshua and Sercu, Tom and Goyal, Siddharth and Lin, Zeming and Liu, Jason and Guo, Demi and Ott, Myle and Zitnick, C. Lawrence and Ma, Jerry and Fergus, Rob},
title={Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences},
year={2019},
doi={10.1101/622803},
url={https://www.biorxiv.org/content/10.1101/622803v4},
journal={PNAS}
}
```
For the self-attention contact prediction:
```bibtex
@article{rao2020transformer,
author = {Rao, Roshan M and Meier, Joshua and Sercu, Tom and Ovchinnikov, Sergey and Rives, Alexander},
title={Transformer protein language models are unsupervised structure learners},
year={2020},
doi={10.1101/2020.12.15.422761},
url={https://www.biorxiv.org/content/10.1101/2020.12.15.422761v1},
journal={bioRxiv}
}
```
For inverse folding using ESM-IF1:
```bibtex
@article{hsu2022learning,
author = {Hsu, Chloe and Verkuil, Robert and Liu, Jason and Lin, Zeming and Hie, Brian and Sercu, Tom and Lerer, Adam and Rives, Alexander},
title = {Learning inverse folding from millions of predicted structures},
year = {2022},
doi = {10.1101/2022.04.10.487779},
url = {https://www.biorxiv.org/content/early/2022/04/10/2022.04.10.487779},
journal = {ICML}
}
``` |
twilightBOO/pov-skin-textures-dreamlike-r34-v2 | twilightBOO | "2023-01-31T00:32:50Z" | 12 | 9 | diffusers | [
"diffusers",
"nsfw",
"stable diffusion",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-23T19:55:08Z" | ---
license: openrail
tags:
- nsfw
- stable diffusion
---
# PoV Skin Textures - Dreamlike r34
[pov-skin-texture-dreamlike-r34](https://civitai.com/models/4481/pov-skin-texture-dreamlike-r34)
This version has vae-ft-mse-840000-ema-pruned.ckpt baked in.
Due to using Dreamlike Diffusion 1.0, this model has the following license:
License
This model is licensed under a modified CreativeML OpenRAIL-M license.
- You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. If you want to, please email us at [email protected]
- You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md |
thakkkkkk/dcf42ae2-bdbd-4e48-8728-5b9f2190a327 | thakkkkkk | "2025-01-14T22:59:20Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-14T22:38:42Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dcf42ae2-bdbd-4e48-8728-5b9f2190a327
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bbd1d69279f50e69_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bbd1d69279f50e69_train_data.json
type:
field_instruction: justification
field_output: enhanced_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/dcf42ae2-bdbd-4e48-8728-5b9f2190a327
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/bbd1d69279f50e69_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 415857d9-2abf-4581-8ee9-0b6e65200674
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 415857d9-2abf-4581-8ee9-0b6e65200674
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dcf42ae2-bdbd-4e48-8728-5b9f2190a327
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.4605
## Model description
More information needed
## Intended uses & limitations
More information needed
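As with other axolotl LoRA runs, this repo holds an adapter for `unsloth/Phi-3.5-mini-instruct`; a minimal, untested sketch using `peft`'s auto class, which loads the base model and adapter in one call:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "thakkkkkk/dcf42ae2-bdbd-4e48-8728-5b9f2190a327"
tokenizer = AutoTokenizer.from_pretrained("unsloth/Phi-3.5-mini-instruct")
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")  # base + adapter

prompt = "Justify why unit tests matter, then enhance the answer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```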
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.607 | 0.4603 | 200 | 10.4605 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
haozhangphy/Taxi-v3 | haozhangphy | "2023-09-12T06:57:51Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-12T06:57:45Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="haozhangphy/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
xander71988/t5-small-finetuned-facet-contract-type | xander71988 | "2023-02-03T14:01:02Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-02-03T13:21:14Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: xander71988/t5-small-finetuned-facet-contract-type
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xander71988/t5-small-finetuned-facet-contract-type
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1701
- Validation Loss: 0.1605
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
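Usage is not documented; a minimal sketch assuming the standard TensorFlow `transformers` seq2seq API (the expected input format is a guess, since the card does not describe it):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "xander71988/t5-small-finetuned-facet-contract-type"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Contract text to classify goes here.", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```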
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 7000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8446 | 0.3244 | 0 |
| 0.2976 | 0.1945 | 1 |
| 0.2240 | 0.1686 | 2 |
| 0.1970 | 0.1763 | 3 |
| 0.1866 | 0.1548 | 4 |
| 0.1793 | 0.1565 | 5 |
| 0.1701 | 0.1605 | 6 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gulaschnascher4000/lora_0-3_3B | gulaschnascher4000 | "2025-01-14T02:03:23Z" | 7 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | "2025-01-14T01:54:37Z" | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: lora_0-3_3B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_0-3_3B
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the gulaschnascher4000/stream-dataset-0-2 and the identity-chatgulaschpt datasets.
It achieves the following results on the evaluation set:
- Loss: 1.7561
## Model description
More information needed
## Intended uses & limitations
More information needed
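The repo contains a LoRA adapter for `meta-llama/Llama-3.2-3B`; a minimal sketch showing how one might load it and optionally merge it for adapter-free deployment (assumes access to the gated base model):
```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "gulaschnascher4000/lora_0-3_3B", device_map="auto"
)

# Optionally fold the LoRA weights into the base model for adapter-free serving
merged = model.merge_and_unload()
merged.save_pretrained("llama-3.2-3b-lora-0-3-merged")
```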
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAFACTOR and the args are:
scale_parameter=True, relative_step=True, warmup_init=True, lr=None
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 0.5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6649 | 0.4505 | 500 | 1.7574 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
gowrias12/facebook-opt-1p3b-text-to-sql | gowrias12 | "2024-05-07T17:41:28Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | "2024-05-06T20:20:04Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: facebook/opt-1.3b
datasets:
- generator
model-index:
- name: facebook-opt-1p3b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook-opt-1p3b-text-to-sql
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
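A minimal, untested sketch for running the adapter on top of `facebook/opt-1.3b`; the prompt template is a placeholder, since the card does not document the format used with the `generator` dataset:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "gowrias12/facebook-opt-1p3b-text-to-sql"
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")

# Hypothetical prompt layout; adapt to the training template if known
prompt = "Schema: employees(id, name, salary)\nQuestion: Who earns more than 100000?\nSQL:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```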
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Youssef320/finetuned_Roberta_newcode_5epoch-f1score | Youssef320 | "2023-09-04T20:03:16Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-04T18:02:44Z" | ---
tags:
- generated_from_trainer
model-index:
- name: finetuned_Roberta_newcode_5epoch-f1score
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_Roberta_newcode_5epoch-f1score
This model is a fine-tuned version of [Youssef320/Reberta-emoji-finetuned-50label](https://huggingface.co/Youssef320/Reberta-emoji-finetuned-50label) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7389
- Top 1 Macro F1 Score: 0.1910
- Top 1 Weighted F1 Score: 0.2434
- Top 3 Macro F1 Score: 0.3680
- Top 3 Weighted F1 Score: 0.4515
## Model description
More information needed
## Intended uses & limitations
More information needed
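Given the top-1/top-3 metrics above, a minimal sketch for getting the three highest-scoring labels with the standard `transformers` pipeline (assumes the repo's tokenizer and label mapping load out of the box):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Youssef320/finetuned_Roberta_newcode_5epoch-f1score",
    top_k=3,  # mirrors the top-3 F1 metrics reported above
)
print(clf("I just got the best news ever!"))
```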
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Top 1 Macro F1 Score | Top 1 Weighted F1 Score | Top 3 Macro F1 Score | Top 3 Weighted F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:----------------------:|:--------------------:|:-------------------------:|
| 3.0337 | 0.14 | 64 | 2.8439 | 0.1703 | 0.2206 | 0.3484 | 0.4290 |
| 2.9343 | 0.27 | 128 | 2.7976 | 0.1792 | 0.2333 | 0.3610 | 0.4415 |
| 2.8978 | 0.41 | 192 | 2.7960 | 0.1830 | 0.2353 | 0.3638 | 0.4416 |
| 2.8719 | 0.54 | 256 | 2.7718 | 0.1847 | 0.2376 | 0.3631 | 0.4456 |
| 2.8862 | 0.68 | 320 | 2.7410 | 0.1844 | 0.2363 | 0.3659 | 0.4496 |
| 2.8835 | 0.81 | 384 | 2.7556 | 0.1830 | 0.2372 | 0.3644 | 0.4484 |
| 2.8682 | 0.95 | 448 | 2.7389 | 0.1910 | 0.2434 | 0.3680 | 0.4515 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.0
|
jackyqs/vits-aishell3-175-chinese | jackyqs | "2023-05-16T07:02:57Z" | 21 | 25 | transformers | [
"transformers",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-05-11T08:30:25Z" | ---
license: apache-2.0
language:
- zh
---
About the AISHELL-3 data:
希尔贝壳's AISHELL-3 Mandarin Chinese speech database contains 85 hours of speech across 88,035 utterances and can serve as the basis for a multi-speaker synthesis system. Recordings were made in a quiet indoor environment using high-fidelity microphones (44.1 kHz, 16-bit).
218 speakers from different accent regions of China took part in the recordings. Professional annotators produced the pinyin and prosody labels, and after strict quality checks the character transcription accuracy of the database exceeds 98%.
About the VITS model:
This is a pretrained model trained on vits_chinese with the 175-speaker Chinese AISHELL-3 data. It can be used directly as a starting point for fine-tuning voice cloning, greatly shortening fine-tuning time.
The model was trained for about two weeks (500K steps) on a Tesla T4 16G. Fine-tuning with 1-3 hours of single-speaker audio already yields very realistic results, making this one of the models whose MOS comes closest to real speech.
The release contains two model files, D_AISHELL.pth and G_AISHELL.pth, which together form the pretrained model.
Fine-tuning:
Place these two model files in the directory that utils.save_checkpoint writes to:
```python
utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
```
Inference:
Simply use the G_AISHELL.pth obtained after fine-tuning on your personal voice data.
```python
utils.load_checkpoint("G_pretrained.pth", net_g, None)
```
|
legraphista/Llama-3.2-1B-Instruct-IMat-GGUF | legraphista | "2024-09-25T21:28:49Z" | 215 | 0 | gguf | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"quantized",
"GGUF",
"quantization",
"imat",
"imatrix",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us",
"conversational"
] | text-generation | "2024-09-25T21:23:12Z" | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
extra_gated_button_content: Submit
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n\u201CAgreement\u201D means the terms and\
\ conditions for use, reproduction, distribution and modification of the Llama\
\ Materials set forth herein.\n\n\u201CDocumentation\u201D means the specifications,\
\ manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n\u201CLicensee\u201D or \u201Cyou\u201D means you, or your employer or any other\
\ person or entity (if you are entering into this Agreement on such person or entity\u2019\
s behalf), of the age required under applicable laws, rules or regulations to provide\
\ legal consent and that has legal authority to bind your employer or such other\
\ person or entity if you are entering in this Agreement on their behalf.\n\n\u201C\
Llama 3.2\u201D means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\
\n\u201CLlama Materials\u201D means, collectively, Meta\u2019s proprietary Llama\
\ 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\
\n\u201CMeta\u201D or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you\
\ are located in or, if you are an entity, your principal place of business is\
\ in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside\
\ of the EEA or Switzerland). \n\nBy clicking \u201CI Accept\u201D below or by using\
\ or distributing any portion or element of the Llama Materials, you agree to be\
\ bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of\
\ Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free\
\ limited license under Meta\u2019s intellectual property or other rights owned\
\ by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create\
\ derivative works of, and make modifications to the Llama Materials. \nb. Redistribution\
\ and Use. \ni. If you distribute or make available the Llama Materials (or any\
\ derivative works thereof), or a product or service (including another AI model)\
\ that contains any of them, you shall (A) provide a copy of this Agreement with\
\ any such Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include \u201CLlama\u201D at the beginning of\
\ any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you. \niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a \u201CNotice\u201D text file distributed as a part of such copies: \u201C\
Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright \xA9 Meta\
\ Platforms, Inc. All Rights Reserved.\u201D\niv. Your use of the Llama Materials\
\ must comply with applicable laws and regulations (including trade compliance laws\
\ and regulations) and adhere to the Acceptable Use Policy for the Llama Materials\
\ (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated\
\ by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on\
\ the Llama 3.2 version release date, the monthly active users of the products or\
\ services made available by or for Licensee, or Licensee\u2019s affiliates, is\
\ greater than 700 million monthly active users in the preceding calendar month,\
\ you must request a license from Meta, which Meta may grant to you in its sole\
\ discretion, and you are not authorized to exercise any of the rights under this\
\ Agreement unless or until Meta otherwise expressly grants you such rights.\n3.\
\ Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS\
\ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS,\
\ WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND,\
\ BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE,\
\ NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE\
\ SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER\
\ IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF\
\ THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ \u201CLlama\u201D (the \u201CMark\u201D) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines\
\ (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/).\
\ All goodwill arising out of your use of the Mark will inure to the benefit of\
\ Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives\
\ made by or for Meta, with respect to any derivative works and modifications of\
\ the Llama Materials that are made by you, as between you and Meta, you are and\
\ will be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (\u201C**Policy**\u201D). The most recent copy of this policy can be found at\
\ [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals\u2019 identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta\_\n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement\_\n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software \u201Cbug,\u201D\
\ or other problems that could lead to a violation of this Policy through one of\
\ the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
inference: false
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: gguf
license: llama3.2
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Llama-3.2-1B-Instruct-IMat-GGUF
_Llama.cpp imatrix quantization of meta-llama/Llama-3.2-1B-Instruct_
Original Model: [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3825](https://github.com/ggerganov/llama.cpp/releases/tag/b3825)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Llama-3.2-1B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q8_0.gguf) | Q8_0 | 1.32GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q6_K.gguf) | Q6_K | 1.02GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q4_K.gguf) | Q4_K | 807.69MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K.gguf) | Q3_K | 690.84MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q2_K.gguf) | Q2_K | 580.87MB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Llama-3.2-1B-Instruct.BF16.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.BF16.gguf) | BF16 | 2.48GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.FP16.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.FP16.gguf) | F16 | 2.48GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q8_0.gguf) | Q8_0 | 1.32GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q6_K.gguf) | Q6_K | 1.02GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q5_K.gguf) | Q5_K | 911.50MB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q5_K_S.gguf) | Q5_K_S | 892.56MB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3.2-1B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q4_K.gguf) | Q4_K | 807.69MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q4_K_S.gguf) | Q4_K_S | 775.65MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ4_NL.gguf) | IQ4_NL | 773.03MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ4_XS.gguf) | IQ4_XS | 743.14MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K.gguf) | Q3_K | 690.84MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K_L.gguf) | Q3_K_L | 732.52MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q3_K_S.gguf) | Q3_K_S | 641.69MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_M.gguf) | IQ3_M | 657.29MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_S.gguf) | IQ3_S | 643.92MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_XS.gguf) | IQ3_XS | 621.11MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 562.11MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q2_K.gguf) | Q2_K | 580.87MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.Q2_K_S.gguf) | Q2_K_S | 554.66MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_M.gguf) | IQ2_M | 515.45MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_S.gguf) | IQ2_S | 488.71MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_XS.gguf) | IQ2_XS | 475.87MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 447.03MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ1_M.gguf) | IQ1_M | 413.61MB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3.2-1B-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Llama-3.2-1B-Instruct-IMat-GGUF/blob/main/Llama-3.2-1B-Instruct.IQ1_S.gguf) | IQ1_S | 393.55MB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Llama-3.2-1B-Instruct-IMat-GGUF --include "Llama-3.2-1B-Instruct.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Llama-3.2-1B-Instruct-IMat-GGUF --include "Llama-3.2-1B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
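If you prefer Python over the CLI, `huggingface_hub` can fetch a single quant directly; a small equivalent sketch:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/Llama-3.2-1B-Instruct-IMat-GGUF",
    filename="Llama-3.2-1B-Instruct.Q8_0.gguf",
)
print(path)  # local path to pass to llama.cpp
```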
---
## Inference
### Simple chat template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
<|eot_id|><|start_header_id|>user<|end_header_id|>
{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>
{next_user_prompt}<|eot_id|>
```
### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>
{next_user_prompt}<|eot_id|>
```
### Llama.cpp
```
llama.cpp/main -m Llama-3.2-1B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Llama-3.2-1B-Instruct.Q8_0`)
3. Run `gguf-split --merge Llama-3.2-1B-Instruct.Q8_0/Llama-3.2-1B-Instruct.Q8_0-00001-of-XXXXX.gguf Llama-3.2-1B-Instruct.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |