| Column | Type | Values / Range |
|:--------------|:---------|:-------------------------------------------|
| modelId | string | length 5 – 137 |
| author | string | length 2 – 42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-03-26 06:27:27 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 10.1k |
| library_name | string | 397 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 – 2025-03-26 06:27:13 |
| card | string | length 11 – 1.01M |
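Each record below follows this schema, in the order modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card. A minimal sketch of loading and filtering such a dump with the 🤗 `datasets` library; the dataset ID `your-org/model-cards-dump` is a hypothetical placeholder, not the actual source of this dump:

```python
# Hypothetical loading sketch: "your-org/model-cards-dump" is a placeholder
# ID, not the real source of this dump.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards-dump", split="train")

# Keep only transformers models with a non-trivial card body.
subset = ds.filter(
    lambda row: row["library_name"] == "transformers" and len(row["card"]) > 500
)

for row in subset.select(range(3)):
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```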
altomek/PLLuM-12B-chat-GGUF | altomek | "2025-02-27T18:53:31Z" | 0 | 0 | null | [
"gguf",
"pl",
"base_model:CYFRAGOVPL/PLLuM-12B-chat",
"base_model:quantized:CYFRAGOVPL/PLLuM-12B-chat",
"license:apache-2.0",
"region:us",
"conversational"
] | null | "2025-02-27T15:25:08Z" | ---
base_model: CYFRAGOVPL/PLLuM-12B-chat
license: apache-2.0
language:
- pl
inference: false
---
# PLLuM-12B-chat
Selected GGUF quants of https://huggingface.co/CYFRAGOVPL/PLLuM-12B-chat |
JoeLisk/t5-csgo-trajectory | JoeLisk | "2024-12-18T01:53:28Z" | 175 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-18T01:53:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
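Since the author-provided snippet is missing, a minimal sketch based on the repo tags (`t5`, `text2text-generation`) might look like this; it is an assumption, not code supplied with the model:

```python
# Minimal sketch inferred from the repo tags (t5, text2text-generation);
# not code supplied by the model author.
from transformers import pipeline

generator = pipeline("text2text-generation", model="JoeLisk/t5-csgo-trajectory")
print(generator("example input sequence", max_new_tokens=64))
```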
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dzanbek/2a886994-9780-47ab-aa54-c59f3d19c581 | dzanbek | "2025-01-23T23:11:04Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | "2025-01-23T22:34:13Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a886994-9780-47ab-aa54-c59f3d19c581
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fd8cb244a58eab5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fd8cb244a58eab5a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dzanbek/2a886994-9780-47ab-aa54-c59f3d19c581
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fd8cb244a58eab5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 41c1cfc2-4854-41dd-aa2c-b18f0b6c6123
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 41c1cfc2-4854-41dd-aa2c-b18f0b6c6123
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2a886994-9780-47ab-aa54-c59f3d19c581
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0587
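A minimal loading sketch, assuming this repository holds only the PEFT LoRA adapter (as the `peft` tag and the axolotl config above suggest):

```python
# Minimal sketch, assuming the repo holds a PEFT LoRA adapter on top of
# NousResearch/Yarn-Llama-2-7b-64k (per the config above).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "dzanbek/2a886994-9780-47ab-aa54-c59f3d19c581",
    trust_remote_code=True,  # the base model relies on custom YaRN code
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Llama-2-7b-64k")

inputs = tokenizer("Your instruction here", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```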
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.3121 |
| 5.0128 | 0.0008 | 5 | 1.2831 |
| 4.827 | 0.0016 | 10 | 1.1430 |
| 4.3899 | 0.0024 | 15 | 1.1015 |
| 4.1246 | 0.0033 | 20 | 1.0747 |
| 4.2226 | 0.0041 | 25 | 1.0612 |
| 4.1296 | 0.0049 | 30 | 1.0587 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrcolley/dummy | mrcolley | "2024-11-11T20:34:57Z" | 118 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-11-11T20:15:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
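Since the author-provided snippet is missing, a minimal sketch based on the repo tags (`camembert`, `fill-mask`) might look like this; it is an assumption, not code supplied with the model:

```python
# Minimal sketch inferred from the repo tags (camembert, fill-mask);
# not code supplied by the model author. CamemBERT uses <mask> as its
# mask token.
from transformers import pipeline

fill = pipeline("fill-mask", model="mrcolley/dummy")
print(fill("Le camembert est <mask>."))
```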
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
whj9068/Mar25_gpt_6_s | whj9068 | "2025-03-26T00:46:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-26T00:45:58Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** whj9068
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
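A minimal inference sketch using plain `transformers` (the repo carries `transformers` and `safetensors` tags); an assumption, not code supplied with the model:

```python
# Minimal sketch using plain transformers, inferred from the repo tags;
# not code supplied by the model author.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "whj9068/Mar25_gpt_6_s"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```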
|
mradermacher/Diabetica-7B-GGUF | mradermacher | "2025-03-15T07:54:36Z" | 390 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"en",
"dataset:WaltonFuture/Diabetica-SFT",
"base_model:WaltonFuture/Diabetica-7B",
"base_model:quantized:WaltonFuture/Diabetica-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-07T03:51:47Z" | ---
base_model: WaltonFuture/Diabetica-7B
datasets:
- WaltonFuture/Diabetica-SFT
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/WaltonFuture/Diabetica-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
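Beyond those READMEs, a minimal Python sketch with `huggingface_hub` and `llama-cpp-python` might look like this; an illustration, not part of the original card (the filename matches the Q4_K_M entry in the table below):

```python
# Minimal sketch using llama-cpp-python; an illustration, not official
# usage code for this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Diabetica-7B-GGUF",
    filename="Diabetica-7B.Q4_K_M.gguf",  # the "fast, recommended" quant below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("What are common symptoms of type 2 diabetes?", max_tokens=128))
```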
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Diabetica-7B-GGUF/resolve/main/Diabetica-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TodoP/Prueba | TodoP | "2024-06-17T21:49:33Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-17T21:49:33Z" | ---
license: apache-2.0
---
|
John6666/wai-real-cn-v13-sdxl | John6666 | "2024-12-23T06:39:20Z" | 199 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-10-18T05:03:29Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- pony
---
The original model is [here](https://civitai.com/models/469902/wai-realcn?modelVersionId=966009).
This model was created by [WAI0731](https://civitai.com/user/WAI0731).
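A minimal text-to-image sketch with `StableDiffusionXLPipeline` (per the `diffusers` tags); an illustration, not code from the original card:

```python
# Minimal sketch using the diffusers SDXL pipeline, inferred from the
# repo tags; not code supplied by the model author.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/wai-real-cn-v13-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("realistic photo of a woman on a city street").images[0]
image.save("sample.png")
```
 |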
facebook/sapiens-pose-0.6b | facebook | "2024-10-07T19:44:54Z" | 31 | 2 | sapiens | [
"sapiens",
"pose-estimation",
"keypoint-detection",
"en",
"arxiv:2408.12569",
"license:cc-by-nc-4.0",
"region:us"
] | keypoint-detection | "2024-09-18T18:44:23Z" | ---
language: en
license: cc-by-nc-4.0
pipeline_tag: keypoint-detection
tags:
- sapiens
- pose-estimation
---
# Pose-Sapiens-0.6B
### Model Details
Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, when finetuned for human-centric vision tasks, generalize to in-the-wild conditions.
Sapiens-0.6B natively supports 1K high-resolution inference. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic.
- **Developed by:** Meta
- **Model type:** Vision Transformer
- **License:** Creative Commons Attribution-NonCommercial 4.0
- **Task:** pose
- **Format:** original
- **File:** sapiens_0.6b_goliath_best_goliath_AP_609.pth
### Model Card
- **Image Size:** 1024 x 768 (H x W)
- **Num Parameters:** 0.664 B
- **FLOPs:** 2.583 TFLOPs
- **Patch Size:** 16 x 16
- **Embedding Dimensions:** 1280
- **Num Layers:** 32
- **Num Heads:** 16
- **Feedforward Channels:** 5120
### More Resources
- **Repository:** [https://github.com/facebookresearch/sapiens](https://github.com/facebookresearch/sapiens)
- **Paper:** [https://arxiv.org/abs/2408.12569](https://arxiv.org/abs/2408.12569)
- **Demo:** [https://huggingface.co/spaces/facebook/sapiens-pose](https://huggingface.co/spaces/facebook/sapiens-pose)
- **Project Page:** [https://about.meta.com/realitylabs/codecavatars/sapiens](https://about.meta.com/realitylabs/codecavatars/sapiens/)
- **Additional Results:** [https://rawalkhirodkar.github.io/sapiens](https://rawalkhirodkar.github.io/sapiens/)
- **HuggingFace Collection:** [https://huggingface.co/collections/facebook/sapiens-66d22047daa6402d565cb2fc](https://huggingface.co/collections/facebook/sapiens-66d22047daa6402d565cb2fc)
## Uses
The Pose-0.6B model can be used to estimate 308 keypoints (body + face + hands + feet) on a single image.
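Inference itself is driven by the tooling in the Sapiens repository (see More Resources above); a minimal sketch for fetching the checkpoint, as an illustration rather than official usage code:

```python
# Minimal download sketch; run inference with the tooling from
# https://github.com/facebookresearch/sapiens (see More Resources above).
from huggingface_hub import hf_hub_download

ckpt = hf_hub_download(
    repo_id="facebook/sapiens-pose-0.6b",
    filename="sapiens_0.6b_goliath_best_goliath_AP_609.pth",
)
print("checkpoint saved at", ckpt)
```
 |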
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k25_task3_organization_fold0 | MayBashendy | "2024-11-25T11:20:18Z" | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-25T11:09:46Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k25_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k25_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9861
- Qwk: 0.0530
- Mse: 0.9861
- Rmse: 0.9930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (sketched as `TrainingArguments` after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
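A sketch of equivalent `TrainingArguments` reconstructed from the list above; the author's actual training script is not published:

```python
# Sketch of TrainingArguments matching the listed hyperparameters; the
# output_dir name is a hypothetical placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="arabert-task3-organization-fold0",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    optim="adamw_torch",  # Adam with betas=(0.9, 0.999), epsilon=1e-08
)
```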
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0164 | 2 | 4.3257 | 0.0 | 4.3257 | 2.0798 |
| No log | 0.0328 | 4 | 2.4358 | -0.0722 | 2.4358 | 1.5607 |
| No log | 0.0492 | 6 | 1.5643 | 0.0 | 1.5643 | 1.2507 |
| No log | 0.0656 | 8 | 1.6200 | 0.0 | 1.6200 | 1.2728 |
| No log | 0.0820 | 10 | 1.4691 | 0.0 | 1.4691 | 1.2121 |
| No log | 0.0984 | 12 | 1.4667 | 0.0 | 1.4667 | 1.2111 |
| No log | 0.1148 | 14 | 1.7052 | 0.0 | 1.7052 | 1.3058 |
| No log | 0.1311 | 16 | 1.4637 | 0.0 | 1.4637 | 1.2098 |
| No log | 0.1475 | 18 | 1.0913 | 0.3293 | 1.0913 | 1.0447 |
| No log | 0.1639 | 20 | 1.0574 | 0.0320 | 1.0574 | 1.0283 |
| No log | 0.1803 | 22 | 1.3219 | -0.0296 | 1.3219 | 1.1497 |
| No log | 0.1967 | 24 | 2.1924 | 0.0 | 2.1924 | 1.4807 |
| No log | 0.2131 | 26 | 2.4574 | 0.0866 | 2.4574 | 1.5676 |
| No log | 0.2295 | 28 | 1.9983 | 0.0 | 1.9983 | 1.4136 |
| No log | 0.2459 | 30 | 1.7084 | 0.0 | 1.7084 | 1.3071 |
| No log | 0.2623 | 32 | 1.5042 | 0.0 | 1.5042 | 1.2264 |
| No log | 0.2787 | 34 | 1.3356 | 0.0 | 1.3356 | 1.1557 |
| No log | 0.2951 | 36 | 1.1637 | 0.1921 | 1.1637 | 1.0788 |
| No log | 0.3115 | 38 | 1.1118 | 0.0530 | 1.1118 | 1.0544 |
| No log | 0.3279 | 40 | 1.0472 | 0.0 | 1.0472 | 1.0233 |
| No log | 0.3443 | 42 | 1.0619 | -0.375 | 1.0619 | 1.0305 |
| No log | 0.3607 | 44 | 1.2018 | -0.0296 | 1.2018 | 1.0963 |
| No log | 0.3770 | 46 | 1.5197 | 0.0 | 1.5197 | 1.2328 |
| No log | 0.3934 | 48 | 1.4381 | 0.0873 | 1.4381 | 1.1992 |
| No log | 0.4098 | 50 | 1.3463 | 0.0873 | 1.3463 | 1.1603 |
| No log | 0.4262 | 52 | 1.2107 | 0.0678 | 1.2107 | 1.1003 |
| No log | 0.4426 | 54 | 1.2140 | 0.0788 | 1.2140 | 1.1018 |
| No log | 0.4590 | 56 | 1.1763 | -0.0565 | 1.1763 | 1.0846 |
| No log | 0.4754 | 58 | 1.3128 | 0.0833 | 1.3128 | 1.1458 |
| No log | 0.4918 | 60 | 1.5930 | 0.0873 | 1.5930 | 1.2621 |
| No log | 0.5082 | 62 | 2.1622 | 0.0873 | 2.1622 | 1.4705 |
| No log | 0.5246 | 64 | 2.2981 | 0.0 | 2.2981 | 1.5160 |
| No log | 0.5410 | 66 | 1.8783 | 0.0873 | 1.8783 | 1.3705 |
| No log | 0.5574 | 68 | 1.3058 | 0.0833 | 1.3058 | 1.1427 |
| No log | 0.5738 | 70 | 1.0576 | -0.0927 | 1.0576 | 1.0284 |
| No log | 0.5902 | 72 | 1.0610 | -0.0927 | 1.0610 | 1.0301 |
| No log | 0.6066 | 74 | 1.2635 | -0.0296 | 1.2635 | 1.1241 |
| No log | 0.6230 | 76 | 1.3947 | 0.0833 | 1.3947 | 1.1810 |
| No log | 0.6393 | 78 | 1.5163 | 0.0833 | 1.5163 | 1.2314 |
| No log | 0.6557 | 80 | 1.4084 | 0.0833 | 1.4084 | 1.1868 |
| No log | 0.6721 | 82 | 1.2679 | -0.0296 | 1.2679 | 1.1260 |
| No log | 0.6885 | 84 | 0.9738 | 0.384 | 0.9738 | 0.9868 |
| No log | 0.7049 | 86 | 0.9616 | 0.2143 | 0.9616 | 0.9806 |
| No log | 0.7213 | 88 | 1.1608 | -0.0732 | 1.1608 | 1.0774 |
| No log | 0.7377 | 90 | 1.5110 | -0.0185 | 1.5110 | 1.2292 |
| No log | 0.7541 | 92 | 1.6383 | -0.0185 | 1.6383 | 1.2799 |
| No log | 0.7705 | 94 | 1.6765 | -0.0421 | 1.6765 | 1.2948 |
| No log | 0.7869 | 96 | 1.6658 | -0.1379 | 1.6658 | 1.2907 |
| No log | 0.8033 | 98 | 1.8144 | -0.0087 | 1.8144 | 1.3470 |
| No log | 0.8197 | 100 | 1.7471 | 0.0833 | 1.7471 | 1.3218 |
| No log | 0.8361 | 102 | 1.3595 | -0.0296 | 1.3595 | 1.1660 |
| No log | 0.8525 | 104 | 1.0257 | -0.1159 | 1.0257 | 1.0128 |
| No log | 0.8689 | 106 | 0.9539 | 0.2080 | 0.9539 | 0.9767 |
| No log | 0.8852 | 108 | 0.8952 | 0.2080 | 0.8952 | 0.9462 |
| No log | 0.9016 | 110 | 0.9483 | 0.0530 | 0.9483 | 0.9738 |
| No log | 0.9180 | 112 | 0.8724 | 0.3623 | 0.8724 | 0.9340 |
| No log | 0.9344 | 114 | 0.7956 | 0.2143 | 0.7956 | 0.8920 |
| No log | 0.9508 | 116 | 1.0434 | 0.0435 | 1.0434 | 1.0215 |
| No log | 0.9672 | 118 | 0.8516 | 0.2080 | 0.8516 | 0.9228 |
| No log | 0.9836 | 120 | 0.9413 | 0.3444 | 0.9413 | 0.9702 |
| No log | 1.0 | 122 | 1.3282 | 0.2814 | 1.3282 | 1.1525 |
| No log | 1.0164 | 124 | 0.9231 | 0.1951 | 0.9231 | 0.9608 |
| No log | 1.0328 | 126 | 0.7568 | 0.2143 | 0.7568 | 0.8699 |
| No log | 1.0492 | 128 | 0.8429 | -0.2222 | 0.8429 | 0.9181 |
| No log | 1.0656 | 130 | 0.7325 | 0.2143 | 0.7325 | 0.8559 |
| No log | 1.0820 | 132 | 1.0143 | 0.3006 | 1.0143 | 1.0071 |
| No log | 1.0984 | 134 | 1.3174 | -0.0421 | 1.3174 | 1.1478 |
| No log | 1.1148 | 136 | 1.1691 | 0.1951 | 1.1691 | 1.0813 |
| No log | 1.1311 | 138 | 0.9329 | 0.1987 | 0.9329 | 0.9659 |
| No log | 1.1475 | 140 | 0.8729 | 0.0435 | 0.8729 | 0.9343 |
| No log | 1.1639 | 142 | 0.9768 | 0.0435 | 0.9768 | 0.9883 |
| No log | 1.1803 | 144 | 1.0910 | 0.1987 | 1.0910 | 1.0445 |
| No log | 1.1967 | 146 | 1.1476 | 0.1987 | 1.1476 | 1.0713 |
| No log | 1.2131 | 148 | 1.1537 | 0.0435 | 1.1537 | 1.0741 |
| No log | 1.2295 | 150 | 1.0706 | 0.0435 | 1.0706 | 1.0347 |
| No log | 1.2459 | 152 | 1.1074 | 0.0435 | 1.1074 | 1.0523 |
| No log | 1.2623 | 154 | 1.4710 | -0.0377 | 1.4710 | 1.2129 |
| No log | 1.2787 | 156 | 2.2908 | 0.1418 | 2.2908 | 1.5136 |
| No log | 1.2951 | 158 | 2.8174 | 0.0217 | 2.8174 | 1.6785 |
| No log | 1.3115 | 160 | 2.4494 | 0.0748 | 2.4494 | 1.5651 |
| No log | 1.3279 | 162 | 1.4372 | 0.0603 | 1.4372 | 1.1988 |
| No log | 1.3443 | 164 | 0.8899 | 0.3077 | 0.8899 | 0.9433 |
| No log | 1.3607 | 166 | 0.8303 | 0.1538 | 0.8303 | 0.9112 |
| No log | 1.3770 | 168 | 0.8554 | 0.1769 | 0.8554 | 0.9249 |
| No log | 1.3934 | 170 | 1.0425 | 0.1987 | 1.0425 | 1.0210 |
| No log | 1.4098 | 172 | 1.1709 | 0.1951 | 1.1709 | 1.0821 |
| No log | 1.4262 | 174 | 1.1002 | 0.3125 | 1.1002 | 1.0489 |
| No log | 1.4426 | 176 | 0.9768 | 0.1333 | 0.9768 | 0.9883 |
| No log | 1.4590 | 178 | 1.0138 | 0.1333 | 1.0138 | 1.0069 |
| No log | 1.4754 | 180 | 1.0447 | 0.0128 | 1.0447 | 1.0221 |
| No log | 1.4918 | 182 | 1.0851 | 0.0272 | 1.0851 | 1.0417 |
| No log | 1.5082 | 184 | 1.1139 | 0.0435 | 1.1139 | 1.0554 |
| No log | 1.5246 | 186 | 1.0848 | 0.0435 | 1.0848 | 1.0415 |
| No log | 1.5410 | 188 | 1.1094 | 0.0435 | 1.1094 | 1.0533 |
| No log | 1.5574 | 190 | 1.0636 | 0.0435 | 1.0636 | 1.0313 |
| No log | 1.5738 | 192 | 0.9789 | 0.2080 | 0.9789 | 0.9894 |
| No log | 1.5902 | 194 | 0.9588 | 0.2080 | 0.9588 | 0.9792 |
| No log | 1.6066 | 196 | 1.0257 | 0.2080 | 1.0257 | 1.0127 |
| No log | 1.6230 | 198 | 1.0708 | 0.3433 | 1.0708 | 1.0348 |
| No log | 1.6393 | 200 | 1.0069 | 0.2080 | 1.0069 | 1.0034 |
| No log | 1.6557 | 202 | 0.9268 | 0.0 | 0.9268 | 0.9627 |
| No log | 1.6721 | 204 | 0.9807 | 0.1118 | 0.9807 | 0.9903 |
| No log | 1.6885 | 206 | 1.0662 | 0.1333 | 1.0662 | 1.0326 |
| No log | 1.7049 | 208 | 1.1681 | 0.3006 | 1.1681 | 1.0808 |
| No log | 1.7213 | 210 | 1.3267 | 0.1951 | 1.3267 | 1.1518 |
| No log | 1.7377 | 212 | 1.1641 | 0.1951 | 1.1641 | 1.0789 |
| No log | 1.7541 | 214 | 0.9161 | 0.1295 | 0.9161 | 0.9571 |
| No log | 1.7705 | 216 | 0.9329 | -0.0593 | 0.9329 | 0.9658 |
| No log | 1.7869 | 218 | 0.9053 | -0.0593 | 0.9053 | 0.9515 |
| No log | 1.8033 | 220 | 0.8565 | 0.2763 | 0.8565 | 0.9255 |
| No log | 1.8197 | 222 | 0.8685 | 0.2763 | 0.8685 | 0.9319 |
| No log | 1.8361 | 224 | 0.8219 | 0.1295 | 0.8219 | 0.9066 |
| No log | 1.8525 | 226 | 0.8727 | -0.0694 | 0.8727 | 0.9342 |
| No log | 1.8689 | 228 | 0.9280 | 0.2092 | 0.9280 | 0.9633 |
| No log | 1.8852 | 230 | 0.8457 | -0.0694 | 0.8457 | 0.9196 |
| No log | 1.9016 | 232 | 0.7434 | 0.1538 | 0.7434 | 0.8622 |
| No log | 1.9180 | 234 | 0.7411 | 0.1818 | 0.7411 | 0.8609 |
| No log | 1.9344 | 236 | 0.7413 | 0.1037 | 0.7413 | 0.8610 |
| No log | 1.9508 | 238 | 0.7888 | -0.0694 | 0.7888 | 0.8881 |
| No log | 1.9672 | 240 | 0.8774 | 0.2048 | 0.8774 | 0.9367 |
| No log | 1.9836 | 242 | 1.0561 | 0.2048 | 1.0561 | 1.0277 |
| No log | 2.0 | 244 | 1.1221 | 0.2804 | 1.1221 | 1.0593 |
| No log | 2.0164 | 246 | 1.1991 | 0.2936 | 1.1991 | 1.0950 |
| No log | 2.0328 | 248 | 1.2104 | 0.2731 | 1.2104 | 1.1002 |
| No log | 2.0492 | 250 | 1.2496 | 0.0571 | 1.2496 | 1.1179 |
| No log | 2.0656 | 252 | 1.2234 | 0.0571 | 1.2234 | 1.1061 |
| No log | 2.0820 | 254 | 1.1707 | 0.0571 | 1.1707 | 1.0820 |
| No log | 2.0984 | 256 | 1.0859 | 0.0571 | 1.0859 | 1.0421 |
| No log | 2.1148 | 258 | 1.1855 | 0.0538 | 1.1855 | 1.0888 |
| No log | 2.1311 | 260 | 1.3622 | 0.0538 | 1.3622 | 1.1671 |
| No log | 2.1475 | 262 | 1.3908 | 0.0538 | 1.3908 | 1.1793 |
| No log | 2.1639 | 264 | 1.1464 | 0.0530 | 1.1464 | 1.0707 |
| No log | 2.1803 | 266 | 0.8765 | 0.0320 | 0.8765 | 0.9362 |
| No log | 2.1967 | 268 | 0.8297 | 0.0320 | 0.8297 | 0.9109 |
| No log | 2.2131 | 270 | 0.9082 | 0.0320 | 0.9082 | 0.9530 |
| No log | 2.2295 | 272 | 1.1835 | 0.0530 | 1.1835 | 1.0879 |
| No log | 2.2459 | 274 | 1.8738 | 0.0797 | 1.8738 | 1.3689 |
| No log | 2.2623 | 276 | 1.9556 | 0.0769 | 1.9556 | 1.3984 |
| No log | 2.2787 | 278 | 1.3106 | 0.1951 | 1.3106 | 1.1448 |
| No log | 2.2951 | 280 | 0.8827 | 0.3265 | 0.8827 | 0.9395 |
| No log | 2.3115 | 282 | 0.8389 | 0.1295 | 0.8389 | 0.9159 |
| No log | 2.3279 | 284 | 0.8268 | 0.1295 | 0.8268 | 0.9093 |
| No log | 2.3443 | 286 | 0.8582 | 0.3433 | 0.8582 | 0.9264 |
| No log | 2.3607 | 288 | 1.0442 | 0.0530 | 1.0442 | 1.0219 |
| No log | 2.3770 | 290 | 1.1042 | 0.1951 | 1.1042 | 1.0508 |
| No log | 2.3934 | 292 | 1.0667 | 0.1951 | 1.0667 | 1.0328 |
| No log | 2.4098 | 294 | 0.9334 | 0.0530 | 0.9334 | 0.9661 |
| No log | 2.4262 | 296 | 0.8274 | 0.2763 | 0.8274 | 0.9096 |
| No log | 2.4426 | 298 | 0.8461 | -0.0405 | 0.8461 | 0.9198 |
| No log | 2.4590 | 300 | 0.8516 | -0.0405 | 0.8516 | 0.9228 |
| No log | 2.4754 | 302 | 0.8468 | -0.0154 | 0.8468 | 0.9202 |
| No log | 2.4918 | 304 | 0.9002 | 0.2029 | 0.9002 | 0.9488 |
| No log | 2.5082 | 306 | 1.0260 | 0.0530 | 1.0260 | 1.0129 |
| No log | 2.5246 | 308 | 1.1061 | 0.0530 | 1.1061 | 1.0517 |
| No log | 2.5410 | 310 | 1.0819 | 0.0530 | 1.0819 | 1.0401 |
| No log | 2.5574 | 312 | 0.9318 | 0.2080 | 0.9318 | 0.9653 |
| No log | 2.5738 | 314 | 0.8778 | 0.0179 | 0.8778 | 0.9369 |
| No log | 2.5902 | 316 | 0.8797 | 0.0179 | 0.8797 | 0.9379 |
| No log | 2.6066 | 318 | 0.9181 | 0.0179 | 0.9181 | 0.9582 |
| No log | 2.6230 | 320 | 0.9960 | 0.0530 | 0.9960 | 0.9980 |
| No log | 2.6393 | 322 | 1.0174 | 0.0530 | 1.0174 | 1.0087 |
| No log | 2.6557 | 324 | 1.0687 | 0.0530 | 1.0687 | 1.0338 |
| No log | 2.6721 | 326 | 1.0392 | 0.0530 | 1.0392 | 1.0194 |
| No log | 2.6885 | 328 | 1.0038 | -0.1159 | 1.0038 | 1.0019 |
| No log | 2.7049 | 330 | 0.9913 | 0.0530 | 0.9913 | 0.9956 |
| No log | 2.7213 | 332 | 1.0188 | 0.0530 | 1.0188 | 1.0093 |
| No log | 2.7377 | 334 | 1.1421 | 0.0530 | 1.1421 | 1.0687 |
| No log | 2.7541 | 336 | 1.0107 | 0.0530 | 1.0107 | 1.0053 |
| No log | 2.7705 | 338 | 0.8681 | -0.0342 | 0.8681 | 0.9317 |
| No log | 2.7869 | 340 | 0.9097 | 0.1270 | 0.9097 | 0.9538 |
| No log | 2.8033 | 342 | 0.9123 | 0.1270 | 0.9123 | 0.9552 |
| No log | 2.8197 | 344 | 0.8508 | 0.1270 | 0.8508 | 0.9224 |
| No log | 2.8361 | 346 | 0.8650 | 0.0435 | 0.8650 | 0.9301 |
| No log | 2.8525 | 348 | 0.8617 | 0.0435 | 0.8617 | 0.9283 |
| No log | 2.8689 | 350 | 0.8551 | 0.0435 | 0.8551 | 0.9247 |
| No log | 2.8852 | 352 | 0.8067 | 0.2143 | 0.8067 | 0.8982 |
| No log | 2.9016 | 354 | 0.7881 | 0.0 | 0.7881 | 0.8878 |
| No log | 2.9180 | 356 | 0.7760 | 0.2143 | 0.7760 | 0.8809 |
| No log | 2.9344 | 358 | 0.7790 | 0.0320 | 0.7790 | 0.8826 |
| No log | 2.9508 | 360 | 0.7559 | 0.2143 | 0.7559 | 0.8694 |
| No log | 2.9672 | 362 | 0.7533 | 0.0 | 0.7533 | 0.8680 |
| No log | 2.9836 | 364 | 0.7822 | 0.1538 | 0.7822 | 0.8844 |
| No log | 3.0 | 366 | 0.7619 | 0.1852 | 0.7619 | 0.8728 |
| No log | 3.0164 | 368 | 0.8316 | 0.1987 | 0.8316 | 0.9119 |
| No log | 3.0328 | 370 | 0.9734 | 0.1951 | 0.9734 | 0.9866 |
| No log | 3.0492 | 372 | 0.9756 | 0.1951 | 0.9756 | 0.9877 |
| No log | 3.0656 | 374 | 0.9613 | 0.1951 | 0.9613 | 0.9805 |
| No log | 3.0820 | 376 | 0.9968 | 0.1951 | 0.9968 | 0.9984 |
| No log | 3.0984 | 378 | 0.9961 | 0.1951 | 0.9961 | 0.9981 |
| No log | 3.1148 | 380 | 0.9099 | 0.1951 | 0.9099 | 0.9539 |
| No log | 3.1311 | 382 | 0.8364 | 0.0320 | 0.8364 | 0.9145 |
| No log | 3.1475 | 384 | 0.8411 | 0.0320 | 0.8411 | 0.9171 |
| No log | 3.1639 | 386 | 0.9024 | 0.0530 | 0.9024 | 0.9499 |
| No log | 3.1803 | 388 | 1.1334 | 0.1951 | 1.1334 | 1.0646 |
| No log | 3.1967 | 390 | 1.2754 | 0.1709 | 1.2754 | 1.1293 |
| No log | 3.2131 | 392 | 1.1153 | 0.1951 | 1.1153 | 1.0561 |
| No log | 3.2295 | 394 | 0.9189 | 0.0128 | 0.9189 | 0.9586 |
| No log | 3.2459 | 396 | 0.9071 | 0.1295 | 0.9071 | 0.9524 |
| No log | 3.2623 | 398 | 0.8913 | -0.0342 | 0.8913 | 0.9441 |
| No log | 3.2787 | 400 | 0.8765 | 0.0179 | 0.8765 | 0.9362 |
| No log | 3.2951 | 402 | 0.9651 | 0.0530 | 0.9651 | 0.9824 |
| No log | 3.3115 | 404 | 1.1242 | 0.1951 | 1.1242 | 1.0603 |
| No log | 3.3279 | 406 | 1.0507 | 0.0530 | 1.0507 | 1.0250 |
| No log | 3.3443 | 408 | 0.9126 | 0.0435 | 0.9126 | 0.9553 |
| No log | 3.3607 | 410 | 0.8986 | -0.1786 | 0.8986 | 0.9479 |
| No log | 3.3770 | 412 | 0.9279 | -0.1440 | 0.9279 | 0.9633 |
| No log | 3.3934 | 414 | 0.9767 | 0.0435 | 0.9767 | 0.9883 |
| No log | 3.4098 | 416 | 0.9858 | 0.0435 | 0.9858 | 0.9929 |
| No log | 3.4262 | 418 | 1.0458 | 0.0530 | 1.0458 | 1.0226 |
| No log | 3.4426 | 420 | 1.1871 | 0.1921 | 1.1871 | 1.0895 |
| No log | 3.4590 | 422 | 1.2378 | 0.1921 | 1.2378 | 1.1125 |
| No log | 3.4754 | 424 | 1.0707 | 0.1921 | 1.0707 | 1.0347 |
| No log | 3.4918 | 426 | 1.0402 | 0.1951 | 1.0402 | 1.0199 |
| No log | 3.5082 | 428 | 1.0466 | 0.1951 | 1.0466 | 1.0231 |
| No log | 3.5246 | 430 | 1.2409 | 0.1921 | 1.2409 | 1.1140 |
| No log | 3.5410 | 432 | 1.3431 | 0.1921 | 1.3431 | 1.1589 |
| No log | 3.5574 | 434 | 1.2329 | 0.1921 | 1.2329 | 1.1104 |
| No log | 3.5738 | 436 | 0.9816 | 0.0375 | 0.9816 | 0.9907 |
| No log | 3.5902 | 438 | 0.9088 | -0.1493 | 0.9088 | 0.9533 |
| No log | 3.6066 | 440 | 0.8946 | -0.1493 | 0.8946 | 0.9458 |
| No log | 3.6230 | 442 | 0.9398 | 0.1951 | 0.9398 | 0.9694 |
| No log | 3.6393 | 444 | 1.0548 | 0.1951 | 1.0548 | 1.0271 |
| No log | 3.6557 | 446 | 1.2093 | 0.1921 | 1.2093 | 1.0997 |
| No log | 3.6721 | 448 | 1.2358 | 0.1921 | 1.2358 | 1.1117 |
| No log | 3.6885 | 450 | 1.1569 | 0.1921 | 1.1569 | 1.0756 |
| No log | 3.7049 | 452 | 0.9748 | 0.1951 | 0.9748 | 0.9873 |
| No log | 3.7213 | 454 | 0.9326 | 0.1951 | 0.9326 | 0.9657 |
| No log | 3.7377 | 456 | 0.9201 | -0.0132 | 0.9201 | 0.9592 |
| No log | 3.7541 | 458 | 0.9337 | -0.0132 | 0.9337 | 0.9663 |
| No log | 3.7705 | 460 | 0.9261 | -0.0132 | 0.9261 | 0.9623 |
| No log | 3.7869 | 462 | 0.9096 | 0.0 | 0.9096 | 0.9537 |
| No log | 3.8033 | 464 | 0.9266 | 0.1951 | 0.9266 | 0.9626 |
| No log | 3.8197 | 466 | 0.9703 | 0.1951 | 0.9703 | 0.9851 |
| No log | 3.8361 | 468 | 0.9115 | 0.1951 | 0.9115 | 0.9547 |
| No log | 3.8525 | 470 | 0.9084 | 0.1987 | 0.9084 | 0.9531 |
| No log | 3.8689 | 472 | 0.9412 | 0.1951 | 0.9412 | 0.9702 |
| No log | 3.8852 | 474 | 0.9152 | 0.1951 | 0.9152 | 0.9567 |
| No log | 3.9016 | 476 | 0.9411 | 0.1951 | 0.9411 | 0.9701 |
| No log | 3.9180 | 478 | 0.8654 | -0.1440 | 0.8654 | 0.9303 |
| No log | 3.9344 | 480 | 0.8304 | -0.1440 | 0.8304 | 0.9113 |
| No log | 3.9508 | 482 | 0.8145 | 0.0179 | 0.8145 | 0.9025 |
| No log | 3.9672 | 484 | 0.8135 | 0.0179 | 0.8135 | 0.9019 |
| No log | 3.9836 | 486 | 0.8109 | 0.0179 | 0.8109 | 0.9005 |
| No log | 4.0 | 488 | 0.7997 | 0.0179 | 0.7997 | 0.8943 |
| No log | 4.0164 | 490 | 0.7937 | 0.0179 | 0.7937 | 0.8909 |
| No log | 4.0328 | 492 | 0.8210 | -0.1440 | 0.8210 | 0.9061 |
| No log | 4.0492 | 494 | 0.8854 | 0.1987 | 0.8854 | 0.9410 |
| No log | 4.0656 | 496 | 0.8691 | 0.1987 | 0.8691 | 0.9323 |
| No log | 4.0820 | 498 | 0.8048 | -0.1440 | 0.8048 | 0.8971 |
| 0.4085 | 4.0984 | 500 | 0.7871 | 0.0179 | 0.7871 | 0.8872 |
| 0.4085 | 4.1148 | 502 | 0.8288 | 0.1538 | 0.8288 | 0.9104 |
| 0.4085 | 4.1311 | 504 | 0.8675 | 0.1295 | 0.8675 | 0.9314 |
| 0.4085 | 4.1475 | 506 | 0.9310 | -0.0154 | 0.9310 | 0.9649 |
| 0.4085 | 4.1639 | 508 | 1.1152 | 0.0538 | 1.1152 | 1.0560 |
| 0.4085 | 4.1803 | 510 | 1.1659 | 0.0538 | 1.1659 | 1.0798 |
| 0.4085 | 4.1967 | 512 | 1.0646 | 0.0330 | 1.0646 | 1.0318 |
| 0.4085 | 4.2131 | 514 | 0.9120 | 0.1538 | 0.9120 | 0.9550 |
| 0.4085 | 4.2295 | 516 | 0.8310 | 0.1538 | 0.8310 | 0.9116 |
| 0.4085 | 4.2459 | 518 | 0.7707 | 0.0 | 0.7707 | 0.8779 |
| 0.4085 | 4.2623 | 520 | 0.7515 | 0.2080 | 0.7515 | 0.8669 |
| 0.4085 | 4.2787 | 522 | 0.7595 | 0.2080 | 0.7595 | 0.8715 |
| 0.4085 | 4.2951 | 524 | 0.7958 | 0.2080 | 0.7958 | 0.8921 |
| 0.4085 | 4.3115 | 526 | 0.9110 | 0.0435 | 0.9110 | 0.9544 |
| 0.4085 | 4.3279 | 528 | 0.9587 | 0.0435 | 0.9587 | 0.9791 |
| 0.4085 | 4.3443 | 530 | 0.8878 | 0.0435 | 0.8878 | 0.9422 |
| 0.4085 | 4.3607 | 532 | 0.8555 | 0.2080 | 0.8555 | 0.9249 |
| 0.4085 | 4.3770 | 534 | 0.8370 | 0.0 | 0.8370 | 0.9149 |
| 0.4085 | 4.3934 | 536 | 0.8567 | 0.0179 | 0.8567 | 0.9256 |
| 0.4085 | 4.4098 | 538 | 0.8953 | 0.0179 | 0.8953 | 0.9462 |
| 0.4085 | 4.4262 | 540 | 1.0401 | 0.0530 | 1.0401 | 1.0199 |
| 0.4085 | 4.4426 | 542 | 1.4669 | 0.1698 | 1.4669 | 1.2112 |
| 0.4085 | 4.4590 | 544 | 1.7715 | -0.0267 | 1.7715 | 1.3310 |
| 0.4085 | 4.4754 | 546 | 1.7215 | -0.0267 | 1.7215 | 1.3121 |
| 0.4085 | 4.4918 | 548 | 1.4307 | 0.1921 | 1.4307 | 1.1961 |
| 0.4085 | 4.5082 | 550 | 1.0660 | 0.1951 | 1.0660 | 1.0325 |
| 0.4085 | 4.5246 | 552 | 0.8495 | 0.2029 | 0.8495 | 0.9217 |
| 0.4085 | 4.5410 | 554 | 0.8066 | 0.0179 | 0.8066 | 0.8981 |
| 0.4085 | 4.5574 | 556 | 0.8130 | 0.0179 | 0.8130 | 0.9016 |
| 0.4085 | 4.5738 | 558 | 0.8517 | 0.2029 | 0.8517 | 0.9229 |
| 0.4085 | 4.5902 | 560 | 0.8953 | 0.0530 | 0.8953 | 0.9462 |
| 0.4085 | 4.6066 | 562 | 0.9216 | 0.0530 | 0.9216 | 0.9600 |
| 0.4085 | 4.6230 | 564 | 0.9552 | 0.1951 | 0.9552 | 0.9773 |
| 0.4085 | 4.6393 | 566 | 0.9637 | 0.1951 | 0.9637 | 0.9817 |
| 0.4085 | 4.6557 | 568 | 0.9262 | 0.1951 | 0.9262 | 0.9624 |
| 0.4085 | 4.6721 | 570 | 0.8766 | 0.0435 | 0.8766 | 0.9363 |
| 0.4085 | 4.6885 | 572 | 0.8646 | 0.0 | 0.8646 | 0.9298 |
| 0.4085 | 4.7049 | 574 | 0.8696 | 0.1333 | 0.8696 | 0.9325 |
| 0.4085 | 4.7213 | 576 | 0.8785 | 0.1538 | 0.8785 | 0.9373 |
| 0.4085 | 4.7377 | 578 | 0.8859 | 0.1538 | 0.8859 | 0.9412 |
| 0.4085 | 4.7541 | 580 | 0.8780 | 0.1538 | 0.8780 | 0.9370 |
| 0.4085 | 4.7705 | 582 | 0.8874 | 0.1951 | 0.8874 | 0.9420 |
| 0.4085 | 4.7869 | 584 | 0.8975 | 0.1951 | 0.8975 | 0.9474 |
| 0.4085 | 4.8033 | 586 | 0.8995 | 0.1951 | 0.8995 | 0.9484 |
| 0.4085 | 4.8197 | 588 | 0.8803 | 0.0435 | 0.8803 | 0.9382 |
| 0.4085 | 4.8361 | 590 | 0.8717 | 0.0435 | 0.8717 | 0.9336 |
| 0.4085 | 4.8525 | 592 | 0.8612 | 0.1538 | 0.8612 | 0.9280 |
| 0.4085 | 4.8689 | 594 | 0.8807 | -0.0132 | 0.8807 | 0.9385 |
| 0.4085 | 4.8852 | 596 | 0.8899 | -0.0132 | 0.8899 | 0.9434 |
| 0.4085 | 4.9016 | 598 | 0.8796 | -0.0132 | 0.8796 | 0.9379 |
| 0.4085 | 4.9180 | 600 | 0.8776 | 0.0272 | 0.8776 | 0.9368 |
| 0.4085 | 4.9344 | 602 | 0.8910 | 0.0435 | 0.8910 | 0.9439 |
| 0.4085 | 4.9508 | 604 | 0.8626 | 0.0435 | 0.8626 | 0.9288 |
| 0.4085 | 4.9672 | 606 | 0.8550 | 0.0435 | 0.8550 | 0.9247 |
| 0.4085 | 4.9836 | 608 | 0.8699 | 0.0435 | 0.8699 | 0.9327 |
| 0.4085 | 5.0 | 610 | 0.8995 | 0.0435 | 0.8995 | 0.9484 |
| 0.4085 | 5.0164 | 612 | 0.9674 | 0.1951 | 0.9674 | 0.9836 |
| 0.4085 | 5.0328 | 614 | 1.0118 | 0.1921 | 1.0118 | 1.0059 |
| 0.4085 | 5.0492 | 616 | 0.9789 | 0.0530 | 0.9789 | 0.9894 |
| 0.4085 | 5.0656 | 618 | 0.9419 | -0.1440 | 0.9419 | 0.9705 |
| 0.4085 | 5.0820 | 620 | 0.9492 | -0.1440 | 0.9492 | 0.9743 |
| 0.4085 | 5.0984 | 622 | 0.9810 | 0.0435 | 0.9810 | 0.9905 |
| 0.4085 | 5.1148 | 624 | 0.9970 | 0.0530 | 0.9970 | 0.9985 |
| 0.4085 | 5.1311 | 626 | 1.0201 | 0.0530 | 1.0201 | 1.0100 |
| 0.4085 | 5.1475 | 628 | 1.0019 | 0.0530 | 1.0019 | 1.0009 |
| 0.4085 | 5.1639 | 630 | 0.9850 | 0.0435 | 0.9850 | 0.9925 |
| 0.4085 | 5.1803 | 632 | 0.9519 | 0.0435 | 0.9519 | 0.9757 |
| 0.4085 | 5.1967 | 634 | 0.9259 | -0.1440 | 0.9259 | 0.9622 |
| 0.4085 | 5.2131 | 636 | 0.9173 | 0.0435 | 0.9173 | 0.9578 |
| 0.4085 | 5.2295 | 638 | 0.9439 | 0.0435 | 0.9439 | 0.9715 |
| 0.4085 | 5.2459 | 640 | 1.0042 | 0.1921 | 1.0042 | 1.0021 |
| 0.4085 | 5.2623 | 642 | 0.9838 | 0.1951 | 0.9838 | 0.9919 |
| 0.4085 | 5.2787 | 644 | 0.9214 | 0.0435 | 0.9214 | 0.9599 |
| 0.4085 | 5.2951 | 646 | 0.8975 | 0.0435 | 0.8975 | 0.9474 |
| 0.4085 | 5.3115 | 648 | 0.8918 | 0.0435 | 0.8918 | 0.9444 |
| 0.4085 | 5.3279 | 650 | 0.9009 | 0.0435 | 0.9009 | 0.9492 |
| 0.4085 | 5.3443 | 652 | 0.8987 | 0.0435 | 0.8987 | 0.9480 |
| 0.4085 | 5.3607 | 654 | 0.8814 | 0.0435 | 0.8814 | 0.9388 |
| 0.4085 | 5.3770 | 656 | 0.8679 | 0.0435 | 0.8679 | 0.9316 |
| 0.4085 | 5.3934 | 658 | 0.8692 | 0.0435 | 0.8692 | 0.9323 |
| 0.4085 | 5.4098 | 660 | 0.8634 | 0.0435 | 0.8634 | 0.9292 |
| 0.4085 | 5.4262 | 662 | 0.8581 | 0.0435 | 0.8581 | 0.9263 |
| 0.4085 | 5.4426 | 664 | 0.8508 | 0.0435 | 0.8508 | 0.9224 |
| 0.4085 | 5.4590 | 666 | 0.8672 | 0.0435 | 0.8672 | 0.9312 |
| 0.4085 | 5.4754 | 668 | 0.8734 | 0.0435 | 0.8734 | 0.9346 |
| 0.4085 | 5.4918 | 670 | 0.8641 | 0.0435 | 0.8641 | 0.9296 |
| 0.4085 | 5.5082 | 672 | 0.8850 | 0.0435 | 0.8850 | 0.9407 |
| 0.4085 | 5.5246 | 674 | 0.9143 | 0.1987 | 0.9143 | 0.9562 |
| 0.4085 | 5.5410 | 676 | 0.9154 | 0.1987 | 0.9154 | 0.9567 |
| 0.4085 | 5.5574 | 678 | 0.8992 | 0.0435 | 0.8992 | 0.9482 |
| 0.4085 | 5.5738 | 680 | 0.8977 | 0.0435 | 0.8977 | 0.9475 |
| 0.4085 | 5.5902 | 682 | 0.9078 | 0.0435 | 0.9078 | 0.9528 |
| 0.4085 | 5.6066 | 684 | 0.9409 | 0.0530 | 0.9409 | 0.9700 |
| 0.4085 | 5.6230 | 686 | 0.9442 | 0.0530 | 0.9442 | 0.9717 |
| 0.4085 | 5.6393 | 688 | 0.9749 | 0.0530 | 0.9749 | 0.9874 |
| 0.4085 | 5.6557 | 690 | 1.0003 | 0.0530 | 1.0003 | 1.0002 |
| 0.4085 | 5.6721 | 692 | 1.0145 | 0.0530 | 1.0145 | 1.0072 |
| 0.4085 | 5.6885 | 694 | 1.0182 | 0.0530 | 1.0182 | 1.0090 |
| 0.4085 | 5.7049 | 696 | 0.9909 | 0.0530 | 0.9909 | 0.9954 |
| 0.4085 | 5.7213 | 698 | 0.9855 | -0.1493 | 0.9855 | 0.9927 |
| 0.4085 | 5.7377 | 700 | 0.9863 | -0.1493 | 0.9863 | 0.9931 |
| 0.4085 | 5.7541 | 702 | 0.9794 | -0.1493 | 0.9794 | 0.9897 |
| 0.4085 | 5.7705 | 704 | 0.9778 | -0.1159 | 0.9778 | 0.9888 |
| 0.4085 | 5.7869 | 706 | 0.9800 | -0.1159 | 0.9800 | 0.9900 |
| 0.4085 | 5.8033 | 708 | 0.9719 | -0.1440 | 0.9719 | 0.9859 |
| 0.4085 | 5.8197 | 710 | 0.9775 | 0.0435 | 0.9775 | 0.9887 |
| 0.4085 | 5.8361 | 712 | 0.9636 | 0.0435 | 0.9636 | 0.9816 |
| 0.4085 | 5.8525 | 714 | 0.9811 | 0.0530 | 0.9811 | 0.9905 |
| 0.4085 | 5.8689 | 716 | 0.9952 | 0.1951 | 0.9952 | 0.9976 |
| 0.4085 | 5.8852 | 718 | 1.0106 | 0.1951 | 1.0106 | 1.0053 |
| 0.4085 | 5.9016 | 720 | 0.9695 | 0.0530 | 0.9695 | 0.9846 |
| 0.4085 | 5.9180 | 722 | 0.9209 | 0.0435 | 0.9209 | 0.9596 |
| 0.4085 | 5.9344 | 724 | 0.8970 | 0.0435 | 0.8970 | 0.9471 |
| 0.4085 | 5.9508 | 726 | 0.8860 | 0.0435 | 0.8860 | 0.9413 |
| 0.4085 | 5.9672 | 728 | 0.8826 | 0.0435 | 0.8826 | 0.9395 |
| 0.4085 | 5.9836 | 730 | 0.8875 | 0.0435 | 0.8875 | 0.9421 |
| 0.4085 | 6.0 | 732 | 0.8914 | 0.0435 | 0.8914 | 0.9442 |
| 0.4085 | 6.0164 | 734 | 0.8975 | 0.0435 | 0.8975 | 0.9474 |
| 0.4085 | 6.0328 | 736 | 0.9029 | 0.0435 | 0.9029 | 0.9502 |
| 0.4085 | 6.0492 | 738 | 0.8989 | -0.1440 | 0.8989 | 0.9481 |
| 0.4085 | 6.0656 | 740 | 0.8890 | -0.1440 | 0.8890 | 0.9429 |
| 0.4085 | 6.0820 | 742 | 0.8807 | -0.1440 | 0.8807 | 0.9384 |
| 0.4085 | 6.0984 | 744 | 0.8858 | 0.0435 | 0.8858 | 0.9412 |
| 0.4085 | 6.1148 | 746 | 0.8996 | 0.0435 | 0.8996 | 0.9484 |
| 0.4085 | 6.1311 | 748 | 0.9321 | 0.0530 | 0.9321 | 0.9655 |
| 0.4085 | 6.1475 | 750 | 0.9640 | 0.0530 | 0.9640 | 0.9819 |
| 0.4085 | 6.1639 | 752 | 1.0142 | 0.1921 | 1.0142 | 1.0071 |
| 0.4085 | 6.1803 | 754 | 0.9893 | 0.0530 | 0.9893 | 0.9947 |
| 0.4085 | 6.1967 | 756 | 0.9259 | 0.0435 | 0.9259 | 0.9622 |
| 0.4085 | 6.2131 | 758 | 0.9269 | -0.1440 | 0.9269 | 0.9628 |
| 0.4085 | 6.2295 | 760 | 0.9581 | -0.1493 | 0.9581 | 0.9788 |
| 0.4085 | 6.2459 | 762 | 0.9708 | -0.1538 | 0.9708 | 0.9853 |
| 0.4085 | 6.2623 | 764 | 0.9667 | -0.1493 | 0.9667 | 0.9832 |
| 0.4085 | 6.2787 | 766 | 0.9420 | -0.1493 | 0.9420 | 0.9706 |
| 0.4085 | 6.2951 | 768 | 0.9694 | 0.0530 | 0.9694 | 0.9846 |
| 0.4085 | 6.3115 | 770 | 1.0320 | 0.0610 | 1.0320 | 1.0159 |
| 0.4085 | 6.3279 | 772 | 1.0404 | 0.0610 | 1.0404 | 1.0200 |
| 0.4085 | 6.3443 | 774 | 1.0241 | 0.0610 | 1.0241 | 1.0120 |
| 0.4085 | 6.3607 | 776 | 0.9695 | 0.0530 | 0.9695 | 0.9846 |
| 0.4085 | 6.3770 | 778 | 0.9369 | 0.0435 | 0.9369 | 0.9679 |
| 0.4085 | 6.3934 | 780 | 0.9347 | -0.1440 | 0.9347 | 0.9668 |
| 0.4085 | 6.4098 | 782 | 0.9399 | -0.1493 | 0.9399 | 0.9695 |
| 0.4085 | 6.4262 | 784 | 0.9464 | -0.1493 | 0.9464 | 0.9729 |
| 0.4085 | 6.4426 | 786 | 0.9550 | -0.1493 | 0.9550 | 0.9772 |
| 0.4085 | 6.4590 | 788 | 0.9702 | -0.1493 | 0.9702 | 0.9850 |
| 0.4085 | 6.4754 | 790 | 0.9836 | -0.1493 | 0.9836 | 0.9918 |
| 0.4085 | 6.4918 | 792 | 0.9990 | 0.0530 | 0.9990 | 0.9995 |
| 0.4085 | 6.5082 | 794 | 1.0390 | 0.0610 | 1.0390 | 1.0193 |
| 0.4085 | 6.5246 | 796 | 1.0578 | 0.0610 | 1.0578 | 1.0285 |
| 0.4085 | 6.5410 | 798 | 1.0194 | 0.0610 | 1.0194 | 1.0097 |
| 0.4085 | 6.5574 | 800 | 0.9745 | 0.0530 | 0.9745 | 0.9872 |
| 0.4085 | 6.5738 | 802 | 0.9282 | 0.0530 | 0.9282 | 0.9634 |
| 0.4085 | 6.5902 | 804 | 0.8936 | 0.0435 | 0.8936 | 0.9453 |
| 0.4085 | 6.6066 | 806 | 0.8771 | -0.1440 | 0.8771 | 0.9365 |
| 0.4085 | 6.6230 | 808 | 0.8798 | -0.1493 | 0.8798 | 0.9380 |
| 0.4085 | 6.6393 | 810 | 0.8841 | -0.1493 | 0.8841 | 0.9403 |
| 0.4085 | 6.6557 | 812 | 0.8804 | -0.1493 | 0.8804 | 0.9383 |
| 0.4085 | 6.6721 | 814 | 0.8779 | -0.1440 | 0.8779 | 0.9370 |
| 0.4085 | 6.6885 | 816 | 0.8847 | 0.0435 | 0.8847 | 0.9406 |
| 0.4085 | 6.7049 | 818 | 0.9026 | 0.0435 | 0.9026 | 0.9500 |
| 0.4085 | 6.7213 | 820 | 0.9171 | 0.0530 | 0.9171 | 0.9577 |
| 0.4085 | 6.7377 | 822 | 0.9194 | 0.0435 | 0.9194 | 0.9589 |
| 0.4085 | 6.7541 | 824 | 0.9247 | -0.1440 | 0.9247 | 0.9616 |
| 0.4085 | 6.7705 | 826 | 0.9444 | -0.1493 | 0.9444 | 0.9718 |
| 0.4085 | 6.7869 | 828 | 0.9583 | -0.1493 | 0.9583 | 0.9789 |
| 0.4085 | 6.8033 | 830 | 0.9680 | -0.1493 | 0.9680 | 0.9839 |
| 0.4085 | 6.8197 | 832 | 0.9863 | -0.1493 | 0.9863 | 0.9931 |
| 0.4085 | 6.8361 | 834 | 1.0032 | -0.1493 | 1.0032 | 1.0016 |
| 0.4085 | 6.8525 | 836 | 1.0136 | -0.1493 | 1.0136 | 1.0068 |
| 0.4085 | 6.8689 | 838 | 1.0295 | -0.1440 | 1.0295 | 1.0146 |
| 0.4085 | 6.8852 | 840 | 1.0531 | 0.0530 | 1.0531 | 1.0262 |
| 0.4085 | 6.9016 | 842 | 1.0909 | 0.0530 | 1.0909 | 1.0445 |
| 0.4085 | 6.9180 | 844 | 1.0820 | 0.0530 | 1.0820 | 1.0402 |
| 0.4085 | 6.9344 | 846 | 1.0552 | 0.0530 | 1.0552 | 1.0272 |
| 0.4085 | 6.9508 | 848 | 1.0479 | -0.1493 | 1.0479 | 1.0236 |
| 0.4085 | 6.9672 | 850 | 1.0358 | -0.1440 | 1.0358 | 1.0177 |
| 0.4085 | 6.9836 | 852 | 1.0168 | -0.1440 | 1.0168 | 1.0084 |
| 0.4085 | 7.0 | 854 | 1.0108 | 0.0530 | 1.0108 | 1.0054 |
| 0.4085 | 7.0164 | 856 | 1.0207 | 0.0530 | 1.0207 | 1.0103 |
| 0.4085 | 7.0328 | 858 | 1.0206 | 0.0530 | 1.0206 | 1.0102 |
| 0.4085 | 7.0492 | 860 | 1.0042 | 0.0530 | 1.0042 | 1.0021 |
| 0.4085 | 7.0656 | 862 | 0.9863 | 0.0530 | 0.9863 | 0.9931 |
| 0.4085 | 7.0820 | 864 | 0.9787 | -0.1440 | 0.9787 | 0.9893 |
| 0.4085 | 7.0984 | 866 | 0.9789 | -0.1440 | 0.9789 | 0.9894 |
| 0.4085 | 7.1148 | 868 | 0.9777 | 0.0530 | 0.9777 | 0.9888 |
| 0.4085 | 7.1311 | 870 | 0.9950 | 0.0530 | 0.9950 | 0.9975 |
| 0.4085 | 7.1475 | 872 | 1.0207 | 0.0530 | 1.0207 | 1.0103 |
| 0.4085 | 7.1639 | 874 | 1.0540 | 0.1921 | 1.0540 | 1.0267 |
| 0.4085 | 7.1803 | 876 | 1.0452 | 0.0610 | 1.0452 | 1.0224 |
| 0.4085 | 7.1967 | 878 | 0.9971 | 0.0530 | 0.9971 | 0.9985 |
| 0.4085 | 7.2131 | 880 | 0.9699 | 0.0530 | 0.9699 | 0.9848 |
| 0.4085 | 7.2295 | 882 | 0.9535 | 0.0435 | 0.9535 | 0.9765 |
| 0.4085 | 7.2459 | 884 | 0.9563 | -0.1440 | 0.9563 | 0.9779 |
| 0.4085 | 7.2623 | 886 | 0.9582 | -0.1440 | 0.9582 | 0.9789 |
| 0.4085 | 7.2787 | 888 | 0.9501 | -0.1440 | 0.9501 | 0.9747 |
| 0.4085 | 7.2951 | 890 | 0.9466 | 0.0435 | 0.9466 | 0.9729 |
| 0.4085 | 7.3115 | 892 | 0.9485 | 0.0435 | 0.9485 | 0.9739 |
| 0.4085 | 7.3279 | 894 | 0.9428 | 0.0435 | 0.9428 | 0.9710 |
| 0.4085 | 7.3443 | 896 | 0.9339 | 0.0435 | 0.9339 | 0.9664 |
| 0.4085 | 7.3607 | 898 | 0.9297 | 0.0435 | 0.9297 | 0.9642 |
| 0.4085 | 7.3770 | 900 | 0.9330 | 0.0435 | 0.9330 | 0.9659 |
| 0.4085 | 7.3934 | 902 | 0.9427 | 0.0435 | 0.9427 | 0.9709 |
| 0.4085 | 7.4098 | 904 | 0.9558 | 0.0435 | 0.9558 | 0.9776 |
| 0.4085 | 7.4262 | 906 | 0.9646 | 0.0435 | 0.9646 | 0.9821 |
| 0.4085 | 7.4426 | 908 | 0.9735 | 0.0435 | 0.9735 | 0.9867 |
| 0.4085 | 7.4590 | 910 | 0.9760 | 0.0435 | 0.9760 | 0.9879 |
| 0.4085 | 7.4754 | 912 | 0.9828 | 0.0435 | 0.9828 | 0.9914 |
| 0.4085 | 7.4918 | 914 | 0.9917 | -0.1493 | 0.9917 | 0.9959 |
| 0.4085 | 7.5082 | 916 | 0.9969 | -0.1493 | 0.9969 | 0.9985 |
| 0.4085 | 7.5246 | 918 | 1.0044 | -0.1493 | 1.0044 | 1.0022 |
| 0.4085 | 7.5410 | 920 | 1.0112 | 0.0375 | 1.0112 | 1.0056 |
| 0.4085 | 7.5574 | 922 | 1.0097 | 0.0530 | 1.0097 | 1.0048 |
| 0.4085 | 7.5738 | 924 | 1.0085 | -0.1440 | 1.0085 | 1.0042 |
| 0.4085 | 7.5902 | 926 | 1.0162 | -0.1440 | 1.0162 | 1.0081 |
| 0.4085 | 7.6066 | 928 | 1.0246 | -0.1440 | 1.0246 | 1.0122 |
| 0.4085 | 7.6230 | 930 | 1.0379 | -0.1493 | 1.0379 | 1.0188 |
| 0.4085 | 7.6393 | 932 | 1.0379 | -0.1493 | 1.0379 | 1.0188 |
| 0.4085 | 7.6557 | 934 | 1.0322 | -0.1440 | 1.0322 | 1.0160 |
| 0.4085 | 7.6721 | 936 | 1.0203 | -0.1440 | 1.0203 | 1.0101 |
| 0.4085 | 7.6885 | 938 | 0.9969 | -0.1440 | 0.9969 | 0.9984 |
| 0.4085 | 7.7049 | 940 | 0.9932 | 0.0530 | 0.9932 | 0.9966 |
| 0.4085 | 7.7213 | 942 | 1.0214 | 0.0530 | 1.0214 | 1.0106 |
| 0.4085 | 7.7377 | 944 | 1.0434 | 0.0530 | 1.0434 | 1.0215 |
| 0.4085 | 7.7541 | 946 | 1.0318 | 0.0530 | 1.0318 | 1.0158 |
| 0.4085 | 7.7705 | 948 | 0.9938 | 0.0530 | 0.9938 | 0.9969 |
| 0.4085 | 7.7869 | 950 | 0.9558 | 0.0530 | 0.9558 | 0.9777 |
| 0.4085 | 7.8033 | 952 | 0.9364 | 0.0530 | 0.9364 | 0.9677 |
| 0.4085 | 7.8197 | 954 | 0.9207 | 0.0530 | 0.9207 | 0.9595 |
| 0.4085 | 7.8361 | 956 | 0.9100 | 0.0435 | 0.9100 | 0.9539 |
| 0.4085 | 7.8525 | 958 | 0.9154 | -0.1440 | 0.9154 | 0.9567 |
| 0.4085 | 7.8689 | 960 | 0.9226 | -0.1440 | 0.9226 | 0.9605 |
| 0.4085 | 7.8852 | 962 | 0.9239 | -0.1440 | 0.9239 | 0.9612 |
| 0.4085 | 7.9016 | 964 | 0.9240 | -0.1440 | 0.9240 | 0.9613 |
| 0.4085 | 7.9180 | 966 | 0.9288 | -0.1440 | 0.9288 | 0.9638 |
| 0.4085 | 7.9344 | 968 | 0.9318 | -0.1440 | 0.9318 | 0.9653 |
| 0.4085 | 7.9508 | 970 | 0.9319 | -0.1440 | 0.9319 | 0.9653 |
| 0.4085 | 7.9672 | 972 | 0.9318 | -0.1440 | 0.9318 | 0.9653 |
| 0.4085 | 7.9836 | 974 | 0.9365 | 0.0435 | 0.9365 | 0.9677 |
| 0.4085 | 8.0 | 976 | 0.9536 | 0.0530 | 0.9536 | 0.9765 |
| 0.4085 | 8.0164 | 978 | 0.9838 | 0.0530 | 0.9838 | 0.9919 |
| 0.4085 | 8.0328 | 980 | 1.0255 | 0.0610 | 1.0255 | 1.0127 |
| 0.4085 | 8.0492 | 982 | 1.0444 | 0.0610 | 1.0444 | 1.0220 |
| 0.4085 | 8.0656 | 984 | 1.0258 | 0.0610 | 1.0258 | 1.0128 |
| 0.4085 | 8.0820 | 986 | 0.9905 | 0.0530 | 0.9905 | 0.9953 |
| 0.4085 | 8.0984 | 988 | 0.9720 | 0.0530 | 0.9720 | 0.9859 |
| 0.4085 | 8.1148 | 990 | 0.9662 | 0.0435 | 0.9662 | 0.9829 |
| 0.4085 | 8.1311 | 992 | 0.9622 | 0.0435 | 0.9622 | 0.9809 |
| 0.4085 | 8.1475 | 994 | 0.9635 | 0.0435 | 0.9635 | 0.9816 |
| 0.4085 | 8.1639 | 996 | 0.9653 | 0.0435 | 0.9653 | 0.9825 |
| 0.4085 | 8.1803 | 998 | 0.9673 | 0.0435 | 0.9673 | 0.9835 |
| 0.094 | 8.1967 | 1000 | 0.9744 | 0.0530 | 0.9744 | 0.9871 |
| 0.094 | 8.2131 | 1002 | 0.9894 | 0.0530 | 0.9894 | 0.9947 |
| 0.094 | 8.2295 | 1004 | 0.9882 | 0.0530 | 0.9882 | 0.9941 |
| 0.094 | 8.2459 | 1006 | 0.9836 | 0.0530 | 0.9836 | 0.9918 |
| 0.094 | 8.2623 | 1008 | 0.9763 | 0.0530 | 0.9763 | 0.9881 |
| 0.094 | 8.2787 | 1010 | 0.9707 | 0.0435 | 0.9707 | 0.9852 |
| 0.094 | 8.2951 | 1012 | 0.9643 | 0.0435 | 0.9643 | 0.9820 |
| 0.094 | 8.3115 | 1014 | 0.9582 | 0.0435 | 0.9582 | 0.9789 |
| 0.094 | 8.3279 | 1016 | 0.9591 | -0.1440 | 0.9591 | 0.9793 |
| 0.094 | 8.3443 | 1018 | 0.9625 | -0.1440 | 0.9625 | 0.9811 |
| 0.094 | 8.3607 | 1020 | 0.9615 | 0.0435 | 0.9615 | 0.9806 |
| 0.094 | 8.3770 | 1022 | 0.9574 | 0.0435 | 0.9574 | 0.9785 |
| 0.094 | 8.3934 | 1024 | 0.9518 | 0.0435 | 0.9518 | 0.9756 |
| 0.094 | 8.4098 | 1026 | 0.9492 | 0.0435 | 0.9492 | 0.9742 |
| 0.094 | 8.4262 | 1028 | 0.9498 | -0.1440 | 0.9498 | 0.9746 |
| 0.094 | 8.4426 | 1030 | 0.9476 | -0.1440 | 0.9476 | 0.9735 |
| 0.094 | 8.4590 | 1032 | 0.9460 | 0.0435 | 0.9460 | 0.9726 |
| 0.094 | 8.4754 | 1034 | 0.9475 | 0.0435 | 0.9475 | 0.9734 |
| 0.094 | 8.4918 | 1036 | 0.9483 | 0.0435 | 0.9483 | 0.9738 |
| 0.094 | 8.5082 | 1038 | 0.9474 | 0.0435 | 0.9474 | 0.9734 |
| 0.094 | 8.5246 | 1040 | 0.9505 | 0.0435 | 0.9505 | 0.9749 |
| 0.094 | 8.5410 | 1042 | 0.9532 | 0.0435 | 0.9532 | 0.9763 |
| 0.094 | 8.5574 | 1044 | 0.9590 | 0.0435 | 0.9590 | 0.9793 |
| 0.094 | 8.5738 | 1046 | 0.9624 | 0.0435 | 0.9624 | 0.9810 |
| 0.094 | 8.5902 | 1048 | 0.9682 | 0.0530 | 0.9682 | 0.9839 |
| 0.094 | 8.6066 | 1050 | 0.9802 | 0.0530 | 0.9802 | 0.9900 |
| 0.094 | 8.6230 | 1052 | 0.9878 | 0.0530 | 0.9878 | 0.9939 |
| 0.094 | 8.6393 | 1054 | 0.9849 | 0.0435 | 0.9849 | 0.9924 |
| 0.094 | 8.6557 | 1056 | 0.9831 | 0.0435 | 0.9831 | 0.9915 |
| 0.094 | 8.6721 | 1058 | 0.9860 | -0.1493 | 0.9860 | 0.9930 |
| 0.094 | 8.6885 | 1060 | 0.9889 | -0.1493 | 0.9889 | 0.9944 |
| 0.094 | 8.7049 | 1062 | 0.9902 | -0.1493 | 0.9902 | 0.9951 |
| 0.094 | 8.7213 | 1064 | 0.9864 | -0.1440 | 0.9864 | 0.9932 |
| 0.094 | 8.7377 | 1066 | 0.9797 | -0.1440 | 0.9797 | 0.9898 |
| 0.094 | 8.7541 | 1068 | 0.9782 | 0.0435 | 0.9782 | 0.9890 |
| 0.094 | 8.7705 | 1070 | 0.9815 | 0.0435 | 0.9815 | 0.9907 |
| 0.094 | 8.7869 | 1072 | 0.9824 | 0.0530 | 0.9824 | 0.9911 |
| 0.094 | 8.8033 | 1074 | 0.9814 | 0.0530 | 0.9814 | 0.9907 |
| 0.094 | 8.8197 | 1076 | 0.9808 | 0.0530 | 0.9808 | 0.9903 |
| 0.094 | 8.8361 | 1078 | 0.9799 | 0.0530 | 0.9799 | 0.9899 |
| 0.094 | 8.8525 | 1080 | 0.9814 | 0.0530 | 0.9814 | 0.9906 |
| 0.094 | 8.8689 | 1082 | 0.9871 | 0.0530 | 0.9871 | 0.9935 |
| 0.094 | 8.8852 | 1084 | 0.9878 | 0.0530 | 0.9878 | 0.9939 |
| 0.094 | 8.9016 | 1086 | 0.9804 | 0.0530 | 0.9804 | 0.9902 |
| 0.094 | 8.9180 | 1088 | 0.9744 | 0.0530 | 0.9744 | 0.9871 |
| 0.094 | 8.9344 | 1090 | 0.9739 | 0.0530 | 0.9739 | 0.9869 |
| 0.094 | 8.9508 | 1092 | 0.9769 | 0.0530 | 0.9769 | 0.9884 |
| 0.094 | 8.9672 | 1094 | 0.9809 | 0.0530 | 0.9809 | 0.9904 |
| 0.094 | 8.9836 | 1096 | 0.9817 | 0.0530 | 0.9817 | 0.9908 |
| 0.094 | 9.0 | 1098 | 0.9780 | 0.0530 | 0.9780 | 0.9889 |
| 0.094 | 9.0164 | 1100 | 0.9785 | 0.0530 | 0.9785 | 0.9892 |
| 0.094 | 9.0328 | 1102 | 0.9779 | 0.0530 | 0.9779 | 0.9889 |
| 0.094 | 9.0492 | 1104 | 0.9807 | 0.0530 | 0.9807 | 0.9903 |
| 0.094 | 9.0656 | 1106 | 0.9861 | 0.0530 | 0.9861 | 0.9930 |
| 0.094 | 9.0820 | 1108 | 0.9875 | 0.0530 | 0.9875 | 0.9937 |
| 0.094 | 9.0984 | 1110 | 0.9884 | 0.0530 | 0.9884 | 0.9942 |
| 0.094 | 9.1148 | 1112 | 0.9902 | 0.0530 | 0.9902 | 0.9951 |
| 0.094 | 9.1311 | 1114 | 0.9922 | 0.0530 | 0.9922 | 0.9961 |
| 0.094 | 9.1475 | 1116 | 0.9944 | 0.0530 | 0.9944 | 0.9972 |
| 0.094 | 9.1639 | 1118 | 0.9964 | 0.0530 | 0.9964 | 0.9982 |
| 0.094 | 9.1803 | 1120 | 0.9971 | 0.0530 | 0.9971 | 0.9986 |
| 0.094 | 9.1967 | 1122 | 0.9985 | 0.0530 | 0.9985 | 0.9993 |
| 0.094 | 9.2131 | 1124 | 1.0006 | 0.0530 | 1.0006 | 1.0003 |
| 0.094 | 9.2295 | 1126 | 1.0003 | 0.0530 | 1.0003 | 1.0001 |
| 0.094 | 9.2459 | 1128 | 1.0009 | 0.0530 | 1.0009 | 1.0004 |
| 0.094 | 9.2623 | 1130 | 0.9982 | 0.0530 | 0.9982 | 0.9991 |
| 0.094 | 9.2787 | 1132 | 0.9945 | 0.0530 | 0.9945 | 0.9972 |
| 0.094 | 9.2951 | 1134 | 0.9899 | 0.0530 | 0.9899 | 0.9949 |
| 0.094 | 9.3115 | 1136 | 0.9849 | 0.0530 | 0.9849 | 0.9924 |
| 0.094 | 9.3279 | 1138 | 0.9841 | 0.0530 | 0.9841 | 0.9920 |
| 0.094 | 9.3443 | 1140 | 0.9858 | 0.0530 | 0.9858 | 0.9929 |
| 0.094 | 9.3607 | 1142 | 0.9891 | 0.0530 | 0.9891 | 0.9945 |
| 0.094 | 9.3770 | 1144 | 0.9920 | 0.0530 | 0.9920 | 0.9960 |
| 0.094 | 9.3934 | 1146 | 0.9932 | 0.0530 | 0.9932 | 0.9966 |
| 0.094 | 9.4098 | 1148 | 0.9941 | 0.0530 | 0.9941 | 0.9970 |
| 0.094 | 9.4262 | 1150 | 0.9959 | 0.0530 | 0.9959 | 0.9980 |
| 0.094 | 9.4426 | 1152 | 0.9991 | 0.0530 | 0.9991 | 0.9996 |
| 0.094 | 9.4590 | 1154 | 1.0035 | 0.0530 | 1.0035 | 1.0017 |
| 0.094 | 9.4754 | 1156 | 1.0056 | 0.0530 | 1.0056 | 1.0028 |
| 0.094 | 9.4918 | 1158 | 1.0039 | 0.0530 | 1.0039 | 1.0020 |
| 0.094 | 9.5082 | 1160 | 1.0034 | 0.0530 | 1.0034 | 1.0017 |
| 0.094 | 9.5246 | 1162 | 1.0042 | 0.0530 | 1.0042 | 1.0021 |
| 0.094 | 9.5410 | 1164 | 1.0031 | 0.0530 | 1.0031 | 1.0015 |
| 0.094 | 9.5574 | 1166 | 1.0037 | 0.0530 | 1.0037 | 1.0019 |
| 0.094 | 9.5738 | 1168 | 1.0046 | 0.0530 | 1.0046 | 1.0023 |
| 0.094 | 9.5902 | 1170 | 1.0081 | 0.0530 | 1.0081 | 1.0040 |
| 0.094 | 9.6066 | 1172 | 1.0106 | 0.0530 | 1.0106 | 1.0053 |
| 0.094 | 9.6230 | 1174 | 1.0126 | 0.0530 | 1.0126 | 1.0063 |
| 0.094 | 9.6393 | 1176 | 1.0126 | 0.0530 | 1.0126 | 1.0063 |
| 0.094 | 9.6557 | 1178 | 1.0136 | 0.0530 | 1.0136 | 1.0068 |
| 0.094 | 9.6721 | 1180 | 1.0138 | 0.0530 | 1.0138 | 1.0069 |
| 0.094 | 9.6885 | 1182 | 1.0097 | 0.0530 | 1.0097 | 1.0048 |
| 0.094 | 9.7049 | 1184 | 1.0043 | 0.0530 | 1.0043 | 1.0022 |
| 0.094 | 9.7213 | 1186 | 1.0010 | 0.0530 | 1.0010 | 1.0005 |
| 0.094 | 9.7377 | 1188 | 0.9998 | 0.0530 | 0.9998 | 0.9999 |
| 0.094 | 9.7541 | 1190 | 0.9976 | 0.0530 | 0.9976 | 0.9988 |
| 0.094 | 9.7705 | 1192 | 0.9958 | 0.0530 | 0.9958 | 0.9979 |
| 0.094 | 9.7869 | 1194 | 0.9941 | 0.0530 | 0.9941 | 0.9971 |
| 0.094 | 9.8033 | 1196 | 0.9929 | 0.0530 | 0.9929 | 0.9965 |
| 0.094 | 9.8197 | 1198 | 0.9908 | 0.0530 | 0.9908 | 0.9954 |
| 0.094 | 9.8361 | 1200 | 0.9893 | 0.0530 | 0.9893 | 0.9947 |
| 0.094 | 9.8525 | 1202 | 0.9881 | 0.0530 | 0.9881 | 0.9940 |
| 0.094 | 9.8689 | 1204 | 0.9871 | 0.0530 | 0.9871 | 0.9935 |
| 0.094 | 9.8852 | 1206 | 0.9867 | 0.0530 | 0.9867 | 0.9933 |
| 0.094 | 9.9016 | 1208 | 0.9865 | 0.0530 | 0.9865 | 0.9932 |
| 0.094 | 9.9180 | 1210 | 0.9863 | 0.0530 | 0.9863 | 0.9931 |
| 0.094 | 9.9344 | 1212 | 0.9863 | 0.0530 | 0.9863 | 0.9931 |
| 0.094 | 9.9508 | 1214 | 0.9863 | 0.0530 | 0.9863 | 0.9931 |
| 0.094 | 9.9672 | 1216 | 0.9862 | 0.0530 | 0.9862 | 0.9931 |
| 0.094 | 9.9836 | 1218 | 0.9862 | 0.0530 | 0.9862 | 0.9931 |
| 0.094 | 10.0 | 1220 | 0.9861 | 0.0530 | 0.9861 | 0.9930 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF | mradermacher | "2025-03-08T16:52:11Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.1-1M",
"base_model:quantized:marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.1-1M",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-08T15:40:56Z" | ---
base_model: marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.1-1M
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.1-1M
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
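For a quick start (a minimal sketch assuming the `llama-cpp-python` bindings, not an officially supported path), a downloaded quant can be run like this:
```python
from llama_cpp import Llama
llm = Llama(
    model_path="Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q4_K_M.gguf",  # local file from the table below
    n_ctx=4096,  # context length; lower this if RAM is tight
)
out = llm("Hello, ", max_tokens=32)
print(out["choices"][0]["text"])
```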
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hush-Qwen2.5-7B-RP-v1.1-1M-i1-GGUF/resolve/main/Hush-Qwen2.5-7B-RP-v1.1-1M.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
team-lucid/hubert-xlarge-korean | team-lucid | "2023-11-06T15:50:49Z" | 15 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"audio",
"automatic-speech-recognition",
"custom_code",
"ko",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-11-04T10:42:36Z" | ---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- audio
---
# hubert-xlarge-korean
## Model Details
HuBERT (Hidden-Unit BERT) is a speech representation learning model proposed by Facebook. Unlike conventional speech recognition models, HuBERT learns directly from the raw waveform in a self-supervised fashion.
This work was trained on Cloud TPUs supported by Google's TPU Research Cloud (TRC).
### Model Description
<table>
<tr>
<td colspan="2"></td>
<td>Base</td>
<td>Large</td>
</tr>
<tr>
<td rowspan="3">CNN Encoder</td>
<td>strides</td>
<td colspan="2">5, 2, 2, 2, 2, 2, 2</td>
</tr>
<tr>
<td>kernel width</td>
<td colspan="2">10, 3, 3, 3, 3, 2, 2</td>
</tr>
<tr>
<td>channel</td>
<td colspan="2">512</td>
</tr>
<tr>
<td rowspan="4">Transformer Encoder</td>
<td>Layer</td>
<td>12</td>
<td>24</td>
</tr>
<tr>
<td>embedding dim</td>
<td>768</td>
<td>1024</td>
</tr>
<tr>
<td>inner FFN dim</td>
<td>3072</td>
<td>4096</td>
</tr>
<tr>
<td>attention heads</td>
<td>8</td>
<td>16</td>
</tr>
<tr>
<td>Projection</td>
<td>dim</td>
<td>256</td>
<td>768</td>
</tr>
<tr>
<td colspan="2">Params</td>
<td>95M</td>
<td>317M </td>
</tr>
</table>
## How to Get Started with the Model
### Pytorch
```py
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("team-lucid/hubert-xlarge-korean")
wav = torch.ones(1, 16000)
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
### JAX/Flax
```py
import jax.numpy as jnp
from transformers import FlaxAutoModel
model = FlaxAutoModel.from_pretrained("team-lucid/hubert-xlarge-korean", trust_remote_code=True)
wav = jnp.ones((1, 16000))
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
## Training Details
### Training Data
This model was trained on roughly 4,000 hours of speech extracted from [자유대화 음성(일반남여)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [다화자 음성합성 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), and [방송 콘텐츠 대화체 음성인식 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463), datasets built by the National Information Society Agency of Korea with funding from the Ministry of Science and ICT.
### Training Procedure
As in the [original paper](https://arxiv.org/pdf/2106.07447.pdf), a Base model was first trained on MFCC features; k-means with 500 clusters was then run on its representations to produce the targets used to retrain both the Base and Large models.
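As an illustrative sketch of that pseudo-labeling step (an assumed simplification using `torchaudio` MFCCs and scikit-learn k-means, not the project's actual training code):
```python
import torch
import torchaudio
from sklearn.cluster import MiniBatchKMeans
mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=13)
wavs = [torch.randn(16000 * 5) for _ in range(8)]  # stand-ins for real clips
feats = torch.cat([mfcc(w).transpose(0, 1) for w in wavs])  # (frames, n_mfcc)
kmeans = MiniBatchKMeans(n_clusters=500, batch_size=1024).fit(feats.numpy())
pseudo_labels = kmeans.predict(feats.numpy())  # per-frame discrete targets
```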
#### Training Hyperparameters
| Hyperparameter | Base | Large |
|:--------------------|---------|--------:|
| Warmup Steps | 32,000 | 32,000 |
| Learning Rates | 5e-4 | 1.5e-3 |
| Batch Size | 128 | 128 |
| Weight Decay | 0.01 | 0.01 |
| Max Steps | 400,000 | 400,000 |
| Learning Rate Decay | 0.1 | 0.1 |
| \\(Adam\beta_1\\) | 0.9 | 0.9 |
| \\(Adam\beta_2\\) | 0.99 | 0.99 | |
Ui1236/Htc | Ui1236 | "2023-10-03T03:09:34Z" | 0 | 0 | allennlp | [
"allennlp",
"chemistry",
"biology",
"legal",
"summarization",
"ae",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | summarization | "2023-10-03T03:07:55Z" | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- ae
metrics:
- bertscore
library_name: allennlp
pipeline_tag: summarization
tags:
- chemistry
- biology
- legal
--- |
QuantFactory/gpt2-GGUF | QuantFactory | "2024-07-14T17:12:16Z" | 249 | 2 | null | [
"gguf",
"exbert",
"text-generation",
"en",
"base_model:openai-community/gpt2",
"base_model:quantized:openai-community/gpt2",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-04T04:54:24Z" | ---
language: en
tags:
- exbert
license: mit
pipeline_tag: text-generation
base_model: openai-community/gpt2
---
# GPT-2-GGUF
This is a quantized version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2), created using llama.cpp.
# Model Description
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling of any kind (which is why it can use so much publicly available data), using an automatic process to generate inputs and labels from the text. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or word piece) to the right. Internally, the model uses a masking mechanism so that the prediction for token `i` only uses the inputs from `1` to `i`, never the future tokens.
In this way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is nonetheless best at what it was pretrained for: generating text from a prompt.
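As a small, self-contained illustration of this next-token objective (a sketch, not the actual training pipeline):
```python
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
ids = tokenizer("Hello, I'm a language model")["input_ids"]
inputs, targets = ids[:-1], ids[1:]  # target = input shifted one token right
for i, t in zip(inputs, targets):
    print(repr(tokenizer.decode([i])), "->", repr(tokenizer.decode([t])))
```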
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit that received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
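For instance (a quick illustration, not part of the original card), the byte-level BPE never needs an unknown token and splits rare words into subword pieces:
```python
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(tokenizer.vocab_size)                 # 50257
print(tokenizer.tokenize('Tokenization!'))  # subword pieces, e.g. ['Token', 'ization', '!']
```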
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> |
DreamGallery/Qwen-Qwen1.5-0.5B-1718196220 | DreamGallery | "2024-06-12T12:43:41Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-06-12T12:43:40Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-0.5B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
muratsimsek003/roberta-combined-squad-turkish-5epoch | muratsimsek003 | "2025-01-03T11:33:20Z" | 238 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"tr",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | "2025-01-03T08:26:26Z" | ---
library_name: transformers
language:
- tr
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aryan-401/chronos-t5-small-fine-tuned | Aryan-401 | "2024-12-09T13:28:48Z" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-09T07:57:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trenden/bf43477b-ebc3-41aa-b80b-561d40433189 | trenden | "2025-02-02T12:00:35Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | "2025-02-02T11:45:34Z" | ---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf43477b-ebc3-41aa-b80b-561d40433189
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# bf43477b-ebc3-41aa-b80b-561d40433189
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mci29/sn29_dec_15 | mci29 | "2024-11-27T15:03:13Z" | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-27T15:00:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
octnn/q-FrozenLake-v1-4x4-noSlippery | octnn | "2024-01-27T07:20:57Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-27T07:20:54Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
# `load_from_hub` is not a library function; a minimal sketch is given below.
model = load_from_hub(repo_id="octnn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
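A minimal sketch of `load_from_hub` (an assumption based on the pickled-dict convention common for these Q-Learning cards; adapt to how the file was actually saved):
```python
import pickle
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id, filename):
    # Download the pickled Q-table/model dict from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```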
|
nghiatrannnnnn/d6d655da-2cdf-44d4-8b16-9141e200c8ae | nghiatrannnnnn | "2025-02-02T12:23:47Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-02T11:21:25Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d6d655da-2cdf-44d4-8b16-9141e200c8ae
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cd7f5ee86cb07e91_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cd7f5ee86cb07e91_train_data.json
type:
field_input: caption_list
field_instruction: s3_key
field_output: default_caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/d6d655da-2cdf-44d4-8b16-9141e200c8ae
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cd7f5ee86cb07e91_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6dc752d6-f1d8-4454-90ec-6a4ce8f61125
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6dc752d6-f1d8-4454-90ec-6a4ce8f61125
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d6d655da-2cdf-44d4-8b16-9141e200c8ae
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0003 | 0.0510 | 200 | 0.0045 |
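For inference, the adapter can be loaded on top of its base model with PEFT; this is a hedged sketch (the repo ids come from this card, while the prompt and generation settings are placeholders, not part of the original card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "nghiatrannnnnn/d6d655da-2cdf-44d4-8b16-9141e200c8ae")
tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))  # base model + LoRA adapter
```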
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
HyperdustProtocol/ImHyperAGI_llama3 | HyperdustProtocol | "2024-06-06T05:57:43Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-06-06T05:46:08Z" | ---
license: apache-2.0
---
|
dimasik87/10724079-28c3-43ca-8864-5ab4e536816c | dimasik87 | "2025-01-14T11:39:01Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | "2025-01-14T11:28:19Z" | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 10724079-28c3-43ca-8864-5ab4e536816c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6d3f496dc688ac45_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6d3f496dc688ac45_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik87/10724079-28c3-43ca-8864-5ab4e536816c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/6d3f496dc688ac45_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6bbb4b65-663e-464a-acb8-c37d9beb4345
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6bbb4b65-663e-464a-acb8-c37d9beb4345
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 10724079-28c3-43ca-8864-5ab4e536816c
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.4797 |
| 1.3417 | 0.0017 | 5 | 1.4529 |
| 1.3007 | 0.0034 | 10 | 1.2966 |
| 1.3546 | 0.0051 | 15 | 1.2251 |
| 1.1357 | 0.0068 | 20 | 1.1819 |
| 1.0024 | 0.0085 | 25 | 1.1675 |
| 0.9282 | 0.0102 | 30 | 1.1648 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/025ff028-ae8e-40af-9db2-d52dab730fb9 | Best000 | "2025-01-20T05:12:29Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-20T05:12:02Z" | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 025ff028-ae8e-40af-9db2-d52dab730fb9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f17875e23087458d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f17875e23087458d_train_data.json
type:
field_input: wikipedia_passage_concept_A
field_instruction: concept_A
field_output: wikipedia_passage_concept_B
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/025ff028-ae8e-40af-9db2-d52dab730fb9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f17875e23087458d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2384438f-8b33-40fe-bc5c-d1a7d9bc7b11
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2384438f-8b33-40fe-bc5c-d1a7d9bc7b11
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 025ff028-ae8e-40af-9db2-d52dab730fb9
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.8802 | 0.0455 | 1 | nan |
| 1.1929 | 0.1364 | 3 | nan |
| 1.356 | 0.2727 | 6 | nan |
| 3.4218 | 0.4091 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Ahs2000/segformer-b0-scene-parse-150 | Ahs2000 | "2024-11-03T15:24:57Z" | 49 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-11-03T12:24:16Z" | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3049
- Mean Iou: 0.0573
- Mean Accuracy: 0.0859
- Overall Accuracy: 0.4101
- Per Category Iou: [0.030010927318135348, 0.44726327746817224, 0.00125928200111358, 0.9390098229092976, 0.38234383192498567, 0.7785783214702916, 0.0, 0.0, 0.0, 0.0, 0.3425946024166124, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan]
- Per Category Accuracy: [0.06397920795118507, 0.8896496979508158, 0.1742260619150468, 0.972699587340297, 0.5473868702844434, 0.9668470205567394, 0.0, nan, 0.0, 0.0, 0.4206481846498948, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan]
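A minimal inference sketch (an illustration on our part, assuming a recent `transformers` version; if the image processor config was not pushed with this checkpoint, load it from the base `nvidia/mit-b0` instead, and the image path below is a placeholder):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("Ahs2000/segformer-b0-scene-parse-150")
model = SegformerForSemanticSegmentation.from_pretrained("Ahs2000/segformer-b0-scene-parse-150")

image = Image.open("scene.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids at 1/4 resolution
```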
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.9227 | 4.0 | 20 | 4.0114 | 0.0393 | 0.0661 | 0.3227 | [0.06495002035888996, 0.3616824052477034, 0.0012751862654151842, 0.9383487415721895, 0.003642086330935252, 0.6238042624952752, 0.0, 0.0, 0.0, 0.0, 0.04837538868243426, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.19314162536793672, 0.7487100796609799, 0.1717062634989201, 0.9751683375280062, 0.0036764320802740043, 0.9665451793252272, 0.0, nan, 0.0, 0.0, 0.04958273876615048, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 3.5704 | 8.0 | 40 | 3.8278 | 0.0440 | 0.0697 | 0.3314 | [0.05867716018346553, 0.3732525545076808, 0.0016563196625038951, 0.940859590195372, 0.06871724092604459, 0.6723288671507391, 0.0, 0.0, 0.0, 0.0, 0.08217889152322527, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.1560358676485134, 0.7765117303839916, 0.24874010079193665, 0.9725133799052144, 0.07300842472042707, 0.9623464326421018, 0.0, nan, 0.0, 0.0, 0.08583421708688917, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 3.4495 | 12.0 | 60 | 3.6593 | 0.0513 | 0.0810 | 0.3724 | [0.04013217032326797, 0.37378386572223904, 0.002132418179570002, 0.9445812374687819, 0.25007496607970453, 0.7221795390214315, 0.0, 0.0, 0.0, 0.0, 0.23447140247510742, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.0822302337400107, 0.8900590972588998, 0.3538516918646508, 0.9678041338050588, 0.29407965253701746, 0.9655045028404612, 0.0, nan, 0.0, 0.0, 0.2529996724096767, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.7922 | 16.0 | 80 | 3.5772 | 0.0562 | 0.0861 | 0.4024 | [0.05052749951447491, 0.4096836982285473, 0.0020946539981145464, 0.9437682003494468, 0.3363278034572279, 0.7582318912588282, 0.0, 0.0, 0.0, 0.0, 0.30894883649841426, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.11003482395314967, 0.9123416702866732, 0.2847372210223182, 0.9733543167088137, 0.44507871335351357, 0.9624996058043618, 0.0, nan, 0.0, 0.0, 0.35915004192045663, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 3.0411 | 20.0 | 100 | 3.4623 | 0.0573 | 0.0884 | 0.4111 | [0.03856540801747222, 0.4218826987563737, 0.0022877904088786706, 0.9429566227457791, 0.3573942676941075, 0.7436237815621519, 0.0, 0.0, 0.0, 0.0, 0.3563730326521024, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.08046345531380782, 0.9365587331747822, 0.29013678905687545, 0.9654825475579796, 0.49064831592876174, 0.9716043987728127, 0.0, nan, 0.0, 0.0, 0.42117010821585427, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.8443 | 24.0 | 120 | 3.5053 | 0.0582 | 0.0870 | 0.3918 | [0.03664342772335264, 0.4283838963956857, 0.0022737335646281494, 0.9405355721043511, 0.31750602659336585, 0.7857576325981176, 0.0, 0.0, 0.0, 0.0, 0.3987358862297607, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.08017012916582819, 0.8485128804522769, 0.3826493880489561, 0.9649809888215473, 0.40650934470129424, 0.9624770803393236, 0.0, nan, 0.0, 0.0, 0.4427466505277536, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.7681 | 28.0 | 140 | 3.4122 | 0.0586 | 0.0878 | 0.4040 | [0.03312422685035546, 0.432634739953462, 0.001900827521034813, 0.9382290337556315, 0.3677854556624332, 0.7851506126960462, 0.0, 0.0, 0.0, 0.0, 0.37191689693992425, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.06676239558782901, 0.8599666855219529, 0.31569474442044637, 0.9650800992305428, 0.5309075166102808, 0.9632474512436309, 0.0, nan, 0.0, 0.0, 0.42372420226203894, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.445 | 32.0 | 160 | 3.3460 | 0.0598 | 0.0888 | 0.4131 | [0.032548051964298, 0.43766520298234657, 0.001747089037591831, 0.9406653786124988, 0.36845328619107537, 0.7891234460485762, 0.0, 0.0, 0.0, 0.0, 0.42112342504840067, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.07099516011855833, 0.8890027845403321, 0.2458603311735061, 0.9738438620623374, 0.49991446098198794, 0.9642926328214046, 0.0, nan, 0.0, 0.0, 0.5302021620961339, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.5309 | 36.0 | 180 | 3.3812 | 0.0577 | 0.0881 | 0.4050 | [0.03263227841514087, 0.42862312848082507, 0.001677291655955992, 0.9421507343303059, 0.3708547174798877, 0.7915861057403827, 0.0, 0.0, 0.0, 0.0, 0.43330380041967825, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.06951147553284741, 0.8180048190361303, 0.2818574514038877, 0.9714201620605354, 0.5192113651678136, 0.962801447035874, 0.0, nan, 0.0, 0.0, 0.5159381020860285, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.4341 | 40.0 | 200 | 3.2711 | 0.0599 | 0.0898 | 0.4169 | [0.0288167612906008, 0.4413618354516986, 0.0018055928611931836, 0.9411833342570688, 0.38248812801419096, 0.7946395385141464, 0.0, 0.0, 0.0, 0.0, 0.4024118202813353, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.059258703430210544, 0.8769676949568881, 0.2613390928725702, 0.9690234921702777, 0.578170442603319, 0.963157349383478, 0.0, nan, 0.0, 0.0, 0.5108243616152979, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.5026 | 44.0 | 220 | 3.3146 | 0.0584 | 0.0870 | 0.4050 | [0.029417411453161204, 0.44352154176953096, 0.0017935972748444507, 0.9412199597905367, 0.3778803290010674, 0.7877165979112559, 0.0, 0.0, 0.0, 0.0, 0.33971275980155, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.060623011095232084, 0.870254796378535, 0.2786177105831533, 0.967356635291715, 0.5463359623488665, 0.966040608908371, 0.0, nan, 0.0, 0.0, 0.3976779953693165, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
| 2.3054 | 48.0 | 240 | 3.3049 | 0.0573 | 0.0859 | 0.4101 | [0.030010927318135348, 0.44726327746817224, 0.00125928200111358, 0.9390098229092976, 0.38234383192498567, 0.7785783214702916, 0.0, 0.0, 0.0, 0.0, 0.3425946024166124, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] | [0.06397920795118507, 0.8896496979508158, 0.1742260619150468, 0.972699587340297, 0.5473868702844434, 0.9668470205567394, 0.0, nan, 0.0, 0.0, 0.4206481846498948, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
PrunaAI/aifeifei798-llama3-8B-DarkIdol-2.3-Uncensored-32K-bnb-8bit-smashed | PrunaAI | "2024-08-17T08:34:50Z" | 7 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K",
"base_model:quantized:aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-08-17T08:30:50Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory usage, or inference energy consumption, respectively, that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/aifeifei798-llama3-8B-DarkIdol-2.3-Uncensored-32K-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
fhai50032/BeagleLake-7B-Toxic-GGUF | fhai50032 | "2024-02-10T14:13:13Z" | 2 | 0 | null | [
"gguf",
"base_model:fhai50032/BeagleLake-7B-Toxic",
"base_model:quantized:fhai50032/BeagleLake-7B-Toxic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-02-09T17:59:35Z" | ---
base_model:
- fhai50032/BeagleLake-7B-Toxic
license: apache-2.0
---
Quantized model: [fhai50032/BeagleLake-7B-Toxic](https://huggingface.co/fhai50032/BeagleLake-7B-Toxic)
Available quants:
```Q4_K_M```
```Q5_K_M```
```Q8_0``` |
mingyujeon/code-search-net-tokenizer | mingyujeon | "2025-02-22T22:59:56Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-22T22:59:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
checkiejan/prefix-paraphase-30-20-auto | checkiejan | "2023-09-18T07:28:51Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-18T07:28:49Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ben-yu/poca-SoccerTwos | ben-yu | "2023-04-08T23:29:10Z" | 33 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-02-19T00:14:31Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
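For this repository, a concrete invocation might look like the following (the config path is an assumption based on the default ML-Agents project layout):
```
mlagents-learn ./config/poca/SoccerTwos.yaml --run-id=SoccerTwos --resume
```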
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: ben-yu/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
pfr/conditional-utilitarian-deberta-01 | pfr | "2022-10-17T19:09:02Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-09-27T18:52:39Z" | ---
tags:
- deberta-v3
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Conditional Utilitarian Deberta 01
## Model description
This is a [Deberta-based](https://huggingface.co/microsoft/deberta-v3-large) model. It was first fine-tuned for computing utility estimates of experiences (see [utilitarian-deberta-01](https://huggingface.co/pfr/utilitarian-deberta-01)). It was then further fine-tuned on 160 examples of pairwise comparisons of conditional utilities.
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios, under extra contextual information.
## Limitations
The model was fine-tuned on only 160 examples, so it should be expected to have limited performance.
Further, while the base model was trained on ~10,000 examples, those examples are still restricted, and only cover first-person sentences. The model does not have the capability of interpreting highly complex or unusual scenarios, and it does not have hard guarantees on its domain of accuracy.
## How to use
Given a scenario S under a context C, and the model U, one computes the estimated conditional utility with `U(f'{C} {S}') - U(C)`.
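A minimal sketch of that computation (an illustration on our part, assuming the checkpoint loads as a single-logit sequence-classification regressor, consistent with the `function_to_apply: none` widget setting above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pfr/conditional-utilitarian-deberta-01")
model = AutoModelForSequenceClassification.from_pretrained("pfr/conditional-utilitarian-deberta-01")

def utility(text: str) -> float:
    """Raw utility score U(text) from the single regression logit."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

C = "I am at my friend's house."   # context (illustrative)
S = "I cuddled with my dog today."  # scenario from the widget example
conditional_utility = utility(f"{C} {S}") - utility(C)
```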
## Training data
The first training data is the train split from the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
The second training data consists of 160 crowdsourced examples of triples (S, C0, C1) consisting of one scenario and two possible contexts, where `U(S | C0) > U(S | C1)`.
## Training procedure
Starting from [utilitarian-deberta-01](https://huggingface.co/pfr/utilitarian-deberta-01), we fine-tune the model over the training data of 160 examples, with a learning rate of `1e-5`, a batch size of `8`, and for 2 epochs.
## Evaluation results
The model achieves ~80% accuracy over 40 crowdsourced examples drawn from the same distribution as the training data. |
BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF | BenevolenceMessiah | "2024-11-11T22:41:42Z" | 39 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-11T22:39:13Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-32b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-32b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-32b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-32b-instruct-q8_0.gguf -c 2048
```
|
stabilityai/japanese-stablelm-3b-4e1t-instruct | stabilityai | "2024-04-26T03:20:42Z" | 682 | 31 | transformers | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"japanese-stablelm",
"causal-lm",
"custom_code",
"ja",
"arxiv:2307.09288",
"arxiv:2104.09864",
"arxiv:2204.06745",
"arxiv:1607.06450",
"arxiv:1910.07467",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-10-16T07:50:31Z" | ---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
license: apache-2.0
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese StableLM-3B-4E1T Instruct
## Model Description
This is a 3B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese StableLM-3B-4E1T Base](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base).
*If you are in search of a larger model, please check [Japanese Stable LM Instruct Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b)*.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-3b-4e1t-instruct")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/japanese-stablelm-3b-4e1t-instruct",
trust_remote_code=True,
torch_dtype="auto",
)
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
def build_prompt(user_query, inputs="", sep="\n\n### "):
sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
p = sys_msg
roles = ["指示", "応答"]
msgs = [": \n" + user_query, ": \n"]
if inputs:
roles.insert(1, "入力")
msgs.insert(1, ": \n" + inputs)
for role, msg in zip(roles, msgs):
p += sep + role + msg
return p
# Infer with prompt without any additional input
user_inputs = {
"user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
"inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=256,
temperature=1,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese StableLM-3B-4E1T Instruct` model is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
### Model Architecture
The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,795,443,200 | 2560 | 32 | 32 | 4096 |
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Tokenizer**: GPT-NeoX ([Black et al., 2022](https://arxiv.org/abs/2204.06745)).
### Training Datasets
- [Japanese translation of the Databricks Dolly-15k dataset](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Japanese translation of the subset of the Anthropic HH dataset](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [Wikinews](https://ja.wikinews.org/wi) [subset](https://huggingface.co/datasets/fujiki/llm-japanese-dataset_wikinews) of the [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Credits
The fine-tuning was carried out by [Fujiki Nakamura](https://huggingface.co/fujiki).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), [Naoki Orii](https://huggingface.co/mrorii), and [Takuya Akiba](https://huggingface.co/iwiwi).
## Acknowledgements
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
|
Cfmaley/Mistral-7B-text-to-sql-flash-attention-2 | Cfmaley | "2024-04-13T16:50:11Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-12T21:16:31Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: Mistral-7B-text-to-sql-flash-attention-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-text-to-sql-flash-attention-2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf | RichardErkhov | "2024-09-27T12:37:44Z" | 20 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-27T09:55:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918 - GGUF
- Model creator: https://huggingface.co/KONIexp/
- Original model: https://huggingface.co/KONIexp/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918/
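The table below lists the available quantizations. As a rough usage sketch (assuming a local llama.cpp build and a downloaded quant file; the flags are standard llama.cpp options):
```bash
llama-cli -m v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K_M.gguf -p "Hello" -n 128
```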
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q2_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q2_K.gguf) | Q2_K | 2.96GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K.gguf) | Q3_K | 3.74GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_0.gguf) | Q4_0 | 4.34GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K.gguf) | Q4_K | 4.58GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_1.gguf) | Q4_1 | 4.78GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_0.gguf) | Q5_0 | 5.21GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_K.gguf) | Q5_K | 5.34GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q5_1.gguf) | Q5_1 | 5.65GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q6_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q6_K.gguf) | Q6_K | 6.14GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q8_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q8_0.gguf) | Q8_0 | 7.95GB |
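If you only need one of these quants, you can fetch just that file. A minimal sketch, assuming `huggingface-cli` is installed (the Q4_K_M file from the table above is used as an example):

```shell
pip3 install huggingface-hub
huggingface-cli download RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918-gguf v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_full_data_20240918.Q4_K_M.gguf --local-dir .
```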
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HakimEbd/my-awesome-model3 | HakimEbd | "2025-01-27T09:10:54Z" | 7 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-to-image",
"license:mit",
"region:us"
] | text-to-image | "2025-01-27T09:10:52Z" | ---
license: mit
pipeline_tag: text-to-image
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: your-repo-url
- Docs: [More Information Needed] |
TheBloke/Valkyrie-V1-AWQ | TheBloke | "2023-12-23T12:49:40Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:cookinai/Valkyrie-V1",
"base_model:quantized:cookinai/Valkyrie-V1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-12-23T12:31:57Z" | ---
base_model: cookinai/Valkyrie-V1
inference: false
license: apache-2.0
model_creator: John Smith
model_name: Valkyrie v1
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Valkyrie v1 - AWQ
- Model creator: [John Smith](https://huggingface.co/cookinai)
- Original model: [Valkyrie v1](https://huggingface.co/cookinai/Valkyrie-V1)
<!-- description start -->
## Description
This repo contains AWQ model files for [John Smith's Valkyrie v1](https://huggingface.co/cookinai/Valkyrie-V1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Valkyrie-V1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Valkyrie-V1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Valkyrie-V1-GGUF)
* [John Smith's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cookinai/Valkyrie-V1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Valkyrie-V1-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Valkyrie-V1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Valkyrie-V1-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Valkyrie-V1-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Valkyrie-V1-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Valkyrie-V1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Valkyrie-V1-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: John Smith's Valkyrie v1
Slerp merge of mindy-labs/mindy-7b-v2 with jondurbin/bagel-dpo-7b-v0.1. This model was then slerp merged with rishiraj/CatPPT.
Heard some talk of jondurbin/bagel-dpo-7b-v0.1 in the community and it sounds interesting. Merged it with two high-performing models to get cookinai/Valkyrie-V1.
Slerp 1:
```yaml
slices:
- sources:
- model: jondurbin/bagel-dpo-7b-v0.1
layer_range: [0, 32]
- model: mindy-labs/mindy-7b-v2
layer_range: [0, 32]
merge_method: slerp
base_model: mindy-labs/mindy-7b-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
Slerp 2:
```yaml
slices:
- sources:
- model: previous/model/path
layer_range: [0, 32]
- model: rishiraj/CatPPT
layer_range: [0, 32]
merge_method: slerp
base_model: previous/model/path
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
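For reference, configs like the two above are normally applied with mergekit's CLI. A minimal sketch, assuming the second config has been saved to a local file (the filename here is hypothetical):

```shell
pip install mergekit
mergekit-yaml slerp2.yaml ./Valkyrie-V1 --cuda
```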
|
jondurbin/airoboros-l2-70b-2.1-creative | jondurbin | "2023-08-30T23:01:15Z" | 17 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-30T19:07:51Z" | ---
license: llama2
---
This is a merge of llama-2-70b with the "creative" adapter from https://hf.co/jondurbin/airoboros-lmoe-70b-2.1
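A merge like this is typically done by loading the adapter onto the base model and folding the LoRA weights in. A minimal sketch with PEFT, where the subfolder name is a guess and this is not necessarily how the author produced the release:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")
model = PeftModel.from_pretrained(
    base,
    "jondurbin/airoboros-lmoe-70b-2.1",
    subfolder="adapters/creative",  # hypothetical adapter location within the repo
)
model = model.merge_and_unload()  # fold the LoRA weights into the base model
model.save_pretrained("airoboros-l2-70b-2.1-creative")
```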
Basically, it's using a subset of the airoboros 2.1 training that is specifically focused on creative tasks, such as writing, roleplay, etc. |
sachinsahu/2008_Sichuan_earthquake-clustered | sachinsahu | "2023-02-05T05:50:18Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-02-05T05:30:10Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sachinsahu/2008_Sichuan_earthquake-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sachinsahu/2008_Sichuan_earthquake-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3953
- Train End Logits Accuracy: 0.9132
- Train Start Logits Accuracy: 0.7986
- Validation Loss: 0.6470
- Validation End Logits Accuracy: 0.8947
- Validation Start Logits Accuracy: 0.7368
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
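As a rough illustration of the intended use, an extractive question-answering call might look like this (the question/context pair is invented for the example):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sachinsahu/2008_Sichuan_earthquake-clustered",
    framework="tf",  # this repo ships TensorFlow weights
)
result = qa(
    question="When did the earthquake occur?",
    context="The 2008 Sichuan earthquake struck Sichuan province, China on May 12, 2008.",
)
print(result["answer"])
```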
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3953 | 0.9132 | 0.7986 | 0.6470 | 0.8947 | 0.7368 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mradermacher/yi-gutenberg-9B-GGUF | mradermacher | "2024-05-21T02:14:32Z" | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:nbeerbower/yi-gutenberg-9B",
"base_model:quantized:nbeerbower/yi-gutenberg-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-19T23:37:04Z" | ---
base_model: nbeerbower/yi-gutenberg-9B
datasets:
- jondurbin/gutenberg-dpo-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/yi-gutenberg-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
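If a quant ever comes as multiple parts, the usual approach is to concatenate them back into one file before use. A minimal sketch; the `partXofY` names below follow the common split naming and are illustrative, not files in this repo:

```shell
cat yi-gutenberg-9B.Q8_0.gguf.part1of2 yi-gutenberg-9B.Q8_0.gguf.part2of2 > yi-gutenberg-9B.Q8_0.gguf
```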
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/yi-gutenberg-9B-GGUF/resolve/main/yi-gutenberg-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JacksonBrune/4c14b5fb-21c2-4d87-bca3-aa9f440ea2f1 | JacksonBrune | "2025-01-20T13:50:52Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/abcd8351-bb0a-4b86-9251-81ff047d7475",
"base_model:adapter:samoline/abcd8351-bb0a-4b86-9251-81ff047d7475",
"region:us"
] | null | "2025-01-20T13:46:49Z" | ---
library_name: peft
base_model: samoline/abcd8351-bb0a-4b86-9251-81ff047d7475
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4c14b5fb-21c2-4d87-bca3-aa9f440ea2f1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/abcd8351-bb0a-4b86-9251-81ff047d7475
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/4c14b5fb-21c2-4d87-bca3-aa9f440ea2f1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a407c9d1-9dba-449c-937f-934464814b5f
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a407c9d1-9dba-449c-937f-934464814b5f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4c14b5fb-21c2-4d87-bca3-aa9f440ea2f1
This model is a fine-tuned version of [samoline/abcd8351-bb0a-4b86-9251-81ff047d7475](https://huggingface.co/samoline/abcd8351-bb0a-4b86-9251-81ff047d7475) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4680
## Model description
More information needed
## Intended uses & limitations
More information needed
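Since this repo holds a LoRA adapter rather than full weights, a typical way to use it is to attach the adapter to its base model. A rough sketch (generation settings omitted):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "samoline/abcd8351-bb0a-4b86-9251-81ff047d7475",
    trust_remote_code=True,  # mirrors the training config above
)
tokenizer = AutoTokenizer.from_pretrained("samoline/abcd8351-bb0a-4b86-9251-81ff047d7475")
model = PeftModel.from_pretrained(base, "JacksonBrune/4c14b5fb-21c2-4d87-bca3-aa9f440ea2f1")
```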
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6778 | 0.0002 | 1 | 1.6577 |
| 1.4453 | 0.0006 | 3 | 1.6513 |
| 1.3372 | 0.0012 | 6 | 1.5905 |
| 1.1215 | 0.0018 | 9 | 1.4680 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kartmannXu/deepseek-7b-ch-0.3-tuned | kartmannXu | "2025-03-23T13:32:44Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-23T13:32:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Coder-AN/StreakNet-Models | Coder-AN | "2024-04-16T04:51:09Z" | 0 | 1 | null | [
"arxiv:2404.09158",
"license:apache-2.0",
"region:us"
] | null | "2024-04-14T02:57:30Z" | ---
license: apache-2.0
frameworks:
- Pytorch
tasks:
- underwater laser imaging
---
<div align="center"><img src="./assets/streaknet_logo.png" width="400"></div><br>
<div align="center"><img src="./assets/overview.jpg"></div>
## Introduction
In this paper, we introduce StreakNet-Arch, a novel signal processing architecture designed for Underwater Carrier LiDAR-Radar (UCLR) imaging systems, to address the limitations in scatter suppression and real-time imaging. StreakNet-Arch formulates the signal processing as a real-time, end-to-end binary classification task, enabling real-time image acquisition. To achieve this, we leverage Self-Attention networks and propose a novel Double Branch Cross Attention (DBC-Attention) mechanism that surpasses the performance of traditional methods. Furthermore, we present a method for embedding streak-tube camera images into attention networks, effectively acting as a learned bandpass filter. To facilitate further research, we contribute a publicly available streak-tube camera image dataset. The dataset contains 2,695,168 real-world underwater 3D point cloud data points. These advancements significantly improve UCLR capabilities, enhancing its performance and applicability in underwater imaging tasks.
For further details, please refer to our [paper](https://arxiv.org/abs/2404.09158).
|
logasja/auramask-vgg-ashby | logasja | "2025-03-10T16:44:59Z" | 0 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | "2025-03-10T16:44:32Z" | ---
library_name: keras
datasets:
- logasja/FDF
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
pipeline_tag: image-to-image
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/96ab8e61346979b3d192883c176d090f)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 16,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
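To try the filter locally, the model can be pulled straight from the Hub. A minimal sketch, assuming a Keras version compatible with the saved format:

```python
from huggingface_hub import from_pretrained_keras

# Downloads and deserializes the Keras model from this repo.
model = from_pretrained_keras("logasja/auramask-vgg-ashby")
```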
## Model Architecture Plot
 |
nitidpong/water-meter-segmentation-Unet-efficientnet-b4-ReduceLROnPlateau | nitidpong | "2024-09-25T09:27:08Z" | 6 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | "2024-09-25T09:26:59Z" | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Unet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "efficientnet-b4",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_use_batchnorm": True,
"decoder_channels": (256, 128, 64, 32, 16),
"decoder_attention_type": None,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7447869777679443,
"test_dataset_iou": 0.6949245929718018
}
]
```
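A minimal inference sketch building on the loader above; the 256x256 input size is an assumption, since the card does not state the training resolution:

```python
import torch
import segmentation_models_pytorch as smp

model = smp.from_pretrained("nitidpong/water-meter-segmentation-Unet-efficientnet-b4-ReduceLROnPlateau")
model.eval()

image = torch.rand(1, 3, 256, 256)      # replace with a preprocessed water-meter photo
with torch.inference_mode():
    logits = model(image)               # (1, 1, H, W) raw logits (classes=1, activation=None)
    mask = torch.sigmoid(logits) > 0.5  # binary segmentation mask
```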
## Dataset
Dataset name: water-meter
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
PrunaAI/resmlp_12_224.fb_distilled_in1k-turbo-green-smashed | PrunaAI | "2024-08-02T15:33:14Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-10T08:33:58Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir resmlp_12_224.fb_distilled_in1k-turbo-green-smashed
huggingface-cli download PrunaAI/resmlp_12_224.fb_distilled_in1k-turbo-green-smashed --local-dir resmlp_12_224.fb_distilled_in1k-turbo-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "resmlp_12_224.fb_distilled_in1k-turbo-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "resmlp_12_224.fb_distilled_in1k-turbo-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch; image = torch.rand(1, 3, 224, 224).to('cuda')
smashed_model(image)
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model resmlp_12_224.fb_distilled_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
tomasonjo/text2cypher-demo-16bit-gguf | tomasonjo | "2024-05-17T14:52:24Z" | 40 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"dataset:tomasonjo/text2cypher-gpt4o-clean",
"base_model:tomasonjo/text2cypher-demo-16bit",
"base_model:quantized:tomasonjo/text2cypher-demo-16bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-17T14:38:02Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: tomasonjo/text2cypher-demo-16bit
datasets:
- tomasonjo/text2cypher-gpt4o-clean
---
# Uploaded model
- **Developed by:** tomasonjo
- **License:** apache-2.0
- **Finetuned from model :** tomasonjo/text2cypher-demo-16bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
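Since this repo ships GGUF weights, one way to run it locally is llama-cpp-python. A rough sketch; the `filename` glob is a guess at the quant naming in this repo, so adjust it to an actual file:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tomasonjo/text2cypher-demo-16bit-gguf",
    filename="*.gguf",  # hypothetical pattern; pick a concrete quant file
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many movies did Tom Hanks act in?"}]
)
print(out["choices"][0]["message"]["content"])
```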
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
lesso/999382b3-3910-4087-a652-48853257a2bb | lesso | "2025-02-06T10:07:52Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2025-02-06T09:39:43Z" | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 999382b3-3910-4087-a652-48853257a2bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- f6b98c81056863ed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f6b98c81056863ed_train_data.json
type:
field_instruction: related_work
field_output: abstract
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/999382b3-3910-4087-a652-48853257a2bb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001008
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/G.O.D/f6b98c81056863ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9c43eb5e-5f27-4f76-bcce-a2419144bfc6
wandb_project: new-08
wandb_run: your_name
wandb_runid: 9c43eb5e-5f27-4f76-bcce-a2419144bfc6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 999382b3-3910-4087-a652-48853257a2bb
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2636
## Model description
More information needed
## Intended uses & limitations
More information needed
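As a minimal usage sketch (assuming the adapter is applied on top of the listed base model; the prompt text is a placeholder), the LoRA weights can be loaded with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "lesso/999382b3-3910-4087-a652-48853257a2bb")  # attach the adapter

# The config above trains a related_work -> abstract task, so the input is a related-work passage
prompt = "..."  # placeholder related-work text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```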
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001008
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit), with defaults betas=(0.9,0.999) and epsilon=1e-08 overridden by optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2723 | 0.0002 | 1 | 2.3789 |
| 2.3788 | 0.0104 | 50 | 2.2900 |
| 2.4969 | 0.0208 | 100 | 2.2765 |
| 2.4964 | 0.0312 | 150 | 2.2674 |
| 2.3601 | 0.0417 | 200 | 2.2636 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh3p7 | silviasapora | "2025-03-11T09:31:25Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-11T06:52:20Z" | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh3p7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/kp5vd4do)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
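For reference, a minimal ORPO fine-tuning sketch with TRL is shown below; the hyperparameters and output directory are illustrative assumptions, not the settings used for this run:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
dataset = load_dataset("argilla/dpo-mix-7k", split="train")  # preference pairs (chosen/rejected)

args = ORPOConfig(output_dir="gemma-7b-orpo", beta=0.1)  # beta is illustrative
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```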
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
powerpuf-bot/wangchanberta-xet_hyp-params | powerpuf-bot | "2024-03-13T06:19:21Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"question-answering",
"generated_from_trainer",
"base_model:Thammarak/wangchanBERTa-QA-thaiqa_squad",
"base_model:finetune:Thammarak/wangchanBERTa-QA-thaiqa_squad",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-03-12T15:24:20Z" | ---
base_model: Thammarak/wangchanBERTa-QA-thaiqa_squad
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-xet_hyp-params
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-xet_hyp-params
This model is a fine-tuned version of [Thammarak/wangchanBERTa-QA-thaiqa_squad](https://huggingface.co/Thammarak/wangchanBERTa-QA-thaiqa_squad) on the **Dataxet FAQs** dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0194
## Model description
More information needed
## Intended uses & limitations
More information needed
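As a minimal usage sketch (the question and context strings are placeholders; in practice they would be Thai FAQ text), the model can be served with the standard question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="powerpuf-bot/wangchanberta-xet_hyp-params")
result = qa(
    question="...",  # placeholder question
    context="...",   # placeholder FAQ passage containing the answer
)
print(result["answer"], result["score"])
```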
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0098 | 1.0 | 187 | 0.0195 |
| 0.02 | 2.0 | 374 | 0.0194 |
| 0.022 | 3.0 | 561 | 0.0194 |
| 0.0193 | 4.0 | 748 | 0.0194 |
| 0.0146 | 5.0 | 935 | 0.0194 |
| 0.0188 | 6.0 | 1122 | 0.0194 |
| 0.0296 | 7.0 | 1309 | 0.0194 |
| 0.0244 | 8.0 | 1496 | 0.0193 |
| 0.0035 | 9.0 | 1683 | 0.0193 |
| 0.0153 | 10.0 | 1870 | 0.0194 |
| 0.0188 | 11.0 | 2057 | 0.0193 |
| 0.0171 | 12.0 | 2244 | 0.0193 |
| 0.0415 | 13.0 | 2431 | 0.0194 |
| 0.0115 | 14.0 | 2618 | 0.0194 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
VISAI-AI/nitibench-ccl-human-finetuned-bge-m3 | VISAI-AI | "2025-03-06T16:24:53Z" | 39 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"sentence-similarity",
"feature-extraction",
"th",
"dataset:airesearch/WangchanX-Legal-ThaiCCL-RAG",
"dataset:VISAI-AI/nitibench",
"arxiv:2502.10868",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-15T06:13:43Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
license: mit
datasets:
- airesearch/WangchanX-Legal-ThaiCCL-RAG
- VISAI-AI/nitibench
language:
- th
base_model:
- BAAI/bge-m3
---
# Human-Finetuned BGE-M3 CCL
**[[📄 Technical Report](https://arxiv.org/pdf/2502.10868)]**
This is a [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) model finetuned on [`airesearch/WangchanX-Legal-ThaiCCL-RAG`](https://huggingface.co/datasets/airesearch/WangchanX-Legal-ThaiCCL-RAG) queries.
## Finetuning Details
Unlike the original [`airesearch/WangchanX-Legal-ThaiCCL-RAG`](https://huggingface.co/datasets/airesearch/WangchanX-Legal-ThaiCCL-RAG) setup, which required humans to rerank and remove irrelevant documents, the model was finetuned in a completely automated environment.
Specifically, given a query in the WangchanX-Legal-ThaiCCL-RAG dataset and a set of law sections to be retrieved, we follow this procedure (sketched in code after the list):
1. Use [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) to retrieve N positive law sections with a score threshold of 0.8
2. Among those N documents, use [`BAAI/bge-reranker-v2-m3`](https://huggingface.co/BAAI/bge-reranker-v2-m3) to rerank the documents and filter out any document the reranker scores below 0.8, yielding the final positive law sections
3. Using the positives from (2), finetune the BGE-M3 model
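A minimal sketch of steps 1–2 with FlagEmbedding (the query and section strings are placeholders):
```python
from FlagEmbedding import BGEM3FlagModel, FlagReranker

retriever = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
reranker = FlagReranker("BAAI/bge-reranker-v2-m3", use_fp16=True)

query = "..."              # placeholder legal query
sections = ["...", "..."]  # placeholder candidate law sections

# Step 1: dense retrieval, keep sections with similarity >= 0.8
q_emb = retriever.encode([query])["dense_vecs"]
s_emb = retriever.encode(sections)["dense_vecs"]
candidates = [s for s, score in zip(sections, (q_emb @ s_emb.T)[0]) if score >= 0.8]

# Step 2: rerank, keep sections the reranker scores >= 0.8 (normalize maps scores to 0-1)
positives = [s for s in candidates if reranker.compute_score([query, s], normalize=True) >= 0.8]
```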
## Model Performance
| **Dataset** | **Top-K** | **HR@k** | **Multi HR@k** | **Recall@k** | **MRR@k** | **Multi MRR@k** |
|:----------------:|:---------:|:-------:|:-------------:|:-----------:|:--------:|:---------------:|
| **NitiBench-CCL** | 1 | 0.735 | – | 0.735 | 0.735 | – |
| **NitiBench-CCL** | 5 | 0.906 | – | 0.906 | 0.805 | – |
| **NitiBench-CCL** | 10 | 0.938 | – | 0.938 | 0.809 | – |
| **NitiBench-Tax**| 1 | 0.480 | 0.140 | 0.255 | 0.480 | 0.255 |
| **NitiBench-Tax**| 5 | 0.740 | 0.220 | 0.411 | 0.565 | 0.320 |
| **NitiBench-Tax**| 10 | 0.800 | 0.280 | 0.499 | 0.574 | 0.333 |
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('VISAI-AI/nitibench-ccl-human-finetuned-bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You also can use sentence-transformers and huggingface transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('VISAI-AI/nitibench-ccl-human-finetuned-bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["สถาบันทางการเงินสามารถลงทุนในหลักทรัพย์ เป็นอัตราส่วนร้อยละสิบของเงินกองทุนทั้งหมดของสถาบันการเงินนั้น สำหรับการถือหรือมีหุ้นในทุกบริษัทรวมกันได้หรือไม่?",
"ในกรณีที่ธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการของสถาบันการเงิน เนื่องสถาบันการเงินดำรงเงินกองทุนต่ำกว่าร้อยละสามสิบห้าของอัตราตามที่กำหนด จะต้องนำเสนอต่อบุคคลใดหรือหน่วยงานใดเพื่อเพิกถอนใบอนุญาตของสถาบันการเงินนั้น"]
sentences_2 = ["พระราชบัญญัติธุรกิจสถาบันการเงิน พ.ศ. 2551 มาตรา 33 ภายใต้บังคับมาตรา 34 และมาตรา 35 ให้สถาบันการเงินลงทุนในหลักทรัพย์เพื่อเป็นกรรมสิทธิ์ของตนได้ ตามหลักเกณฑ์ที่ธนาคารแห่งประเทศไทยประกาศกำหนด",
"พระราชบัญญัติธุรกิจสถาบันการเงิน พ.ศ. 2551 มาตรา 97 ในกรณีที่สถาบันการเงินดำรงเงินกองทุนต่ำกว่าร้อยละสามสิบห้าของอัตราตามที่กำหนดในมาตรา 30 ให้ธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการของสถาบันการเงินนั้น เว้นแต่ในกรณีที่ธนาคารแห่งประเทศไทยเห็นว่าการมีคำสั่งปิดกิจการจะก่อให้เกิดผลกระทบ หรือความเสียหายต่อระบบเศรษฐกิจโดยรวมอย่างรุนแรง ธนาคารแห่งประเทศไทยอาจยังไม่สั่งปิดกิจการของสถาบันการเงินก็ได้\nเมื่อธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการตามวรรคหนึ่งแล้ว ให้เสนอรัฐมนตรีเพิกถอนใบอนุญาตของสถาบันการเงินนั้น"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'สถาบัน': 0.126, 'การเงิน': 0.10956, 'สามารถ': 0.07, 'ลงทุน': 0.1417, 'ใน': 0.01715, 'หลัก': 0.0758, 'ทรัพย์': 0.1702, 'อัตรา': 0.04926, 'ส่วน': 0.06107, 'ร้อยละ': 0.09, 'สิบ': 0.14, 'เงิน': 0.05026, 'กองทุน': 0.1205, 'ทั้งหมด': 0.03644, 'ถือ': 0.0987, 'หุ้น': 0.0928, 'ในทุก': 0.04883, 'บริษัท': 0.0999, 'รวม': 0.0835, 'กันได้': 0.09814, 'หรือไม่': 0.0398},
# {'กรณี': 0.0323, 'ธนาคาร': 0.08136, 'แห่งประเทศไทย': 0.151, 'คําสั่ง': 0.161, 'ปิด': 0.1583, 'กิจการ': 0.1199, 'สถาบัน': 0.08545, 'การเงิน': 0.1334, 'เนื่อง': 0.006992, 'ดํารง': 0.1523, 'เงิน': 0.12146, 'กองทุน': 0.1776, 'ต่ํากว่า': 0.1335, 'ร้อยละ': 0.10126, 'สาม': 0.02498, 'ห้า': 0.1158, 'อัตรา': 0.12256, 'กําหนด': 0.0572, 'จะต้อง': 0.07074, 'นําเสนอ': 0.1752, 'ต่อ': 0.0696, 'บุคคล': 0.0817, 'ใด': 0.0577, 'หรือ': 0.0248, 'หน่วยงาน': 0.076, 'เพ': 0.02034, 'ิก': 0.0921, 'ถอน': 0.1582, 'ใบ': 0.04617, 'อนุญาต': 0.179}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.10838508605957031
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.06803131103515625
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('VISAI-AI/nitibench-ccl-human-finetuned-bge-m3', use_fp16=True)
sentences_1 = ["สถาบันทางการเงินสามารถลงทุนในหลักทรัพย์ เป็นอัตราส่วนร้อยละสิบของเงินกองทุนทั้งหมดของสถาบันการเงินนั้น สำหรับการถือหรือมีหุ้นในทุกบริษัทรวมกันได้หรือไม่?",
"ในกรณีที่ธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการของสถาบันการเงิน เนื่องสถาบันการเงินดำรงเงินกองทุนต่ำกว่าร้อยละสามสิบห้าของอัตราตามที่กำหนด จะต้องนำเสนอต่อบุคคลใดหรือหน่วยงานใดเพื่อเพิกถอนใบอนุญาตของสถาบันการเงินนั้น"]
sentences_2 = ["พระราชบัญญัติธุรกิจสถาบันการเงิน พ.ศ. 2551 มาตรา 33 ภายใต้บังคับมาตรา 34 และมาตรา 35 ให้สถาบันการเงินลงทุนในหลักทรัพย์เพื่อเป็นกรรมสิทธิ์ของตนได้ ตามหลักเกณฑ์ที่ธนาคารแห่งประเทศไทยประกาศกำหนด",
"พระราชบัญญัติธุรกิจสถาบันการเงิน พ.ศ. 2551 มาตรา 97 ในกรณีที่สถาบันการเงินดำรงเงินกองทุนต่ำกว่าร้อยละสามสิบห้าของอัตราตามที่กำหนดในมาตรา 30 ให้ธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการของสถาบันการเงินนั้น เว้นแต่ในกรณีที่ธนาคารแห่งประเทศไทยเห็นว่าการมีคำสั่งปิดกิจการจะก่อให้เกิดผลกระทบ หรือความเสียหายต่อระบบเศรษฐกิจโดยรวมอย่างรุนแรง ธนาคารแห่งประเทศไทยอาจยังไม่สั่งปิดกิจการของสถาบันการเงินก็ได้\nเมื่อธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการตามวรรคหนึ่งแล้ว ให้เสนอรัฐมนตรีเพิกถอนใบอนุญาตของสถาบันการเงินนั้น"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# tensor(0.5813)
# tensor(0.5718)
```
### Compute score for text pairs
Input a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('VISAI-AI/nitibench-ccl-human-finetuned-bge-m3', use_fp16=True)
sentences_1 = ["สถาบันทางการเงินสามารถลงทุนในหลักทรัพย์ เป็นอัตราส่วนร้อยละสิบของเงินกองทุนทั้งหมดของสถาบันการเงินนั้น สำหรับการถือหรือมีหุ้นในทุกบริษัทรวมกันได้หรือไม่?",
"ในกรณีที่ธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการของสถาบันการเงิน เนื่องสถาบันการเงินดำรงเงินกองทุนต่ำกว่าร้อยละสามสิบห้าของอัตราตามที่กำหนด จะต้องนำเสนอต่อบุคคลใดหรือหน่วยงานใดเพื่อเพิกถอนใบอนุญาตของสถาบันการเงินนั้น"]
sentences_2 = ["พระราชบัญญัติธุรกิจสถาบันการเงิน พ.ศ. 2551 มาตรา 33 ภายใต้บังคับมาตรา 34 และมาตรา 35 ให้สถาบันการเงินลงทุนในหลักทรัพย์เพื่อเป็นกรรมสิทธิ์ของตนได้ ตามหลักเกณฑ์ที่ธนาคารแห่งประเทศไทยประกาศกำหนด",
"พระราชบัญญัติธุรกิจสถาบันการเงิน พ.ศ. 2551 มาตรา 97 ในกรณีที่สถาบันการเงินดำรงเงินกองทุนต่ำกว่าร้อยละสามสิบห้าของอัตราตามที่กำหนดในมาตรา 30 ให้ธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการของสถาบันการเงินนั้น เว้นแต่ในกรณีที่ธนาคารแห่งประเทศไทยเห็นว่าการมีคำสั่งปิดกิจการจะก่อให้เกิดผลกระทบ หรือความเสียหายต่อระบบเศรษฐกิจโดยรวมอย่างรุนแรง ธนาคารแห่งประเทศไทยอาจยังไม่สั่งปิดกิจการของสถาบันการเงินก็ได้\nเมื่อธนาคารแห่งประเทศไทยมีคำสั่งปิดกิจการตามวรรคหนึ่งแล้ว ให้เสนอรัฐมนตรีเพิกถอนใบอนุญาตของสถาบันการเงินนั้น"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.5812647342681885, 0.5717734098434448, 0.6460118889808655, 0.8784525990486145],
# 'sparse': [0.1083984375, 0.07684326171875, 0.07061767578125, 0.314208984375],
# 'dense': [0.61865234375, 0.58935546875, 0.666015625, 0.8916015625],
# 'sparse+dense': [0.4485676884651184, 0.41851806640625, 0.4675496518611908, 0.6991373896598816],
# 'colbert+sparse+dense': [0.5016465187072754, 0.47982022166252136, 0.538934588432312, 0.7708634734153748]
# }
```
## Acknowledgement
We sincerely appreciate the generous support from the WangchanX program sponsors—PTT, SCB, and SCBX—whose funding made this project possible. We are also grateful for the invaluable collaboration with VISTEC, which was crucial in bringing this project to fruition.
Thanks to Pirat Pothavorn for evaluating the model performance on NitiBench and to Supavish Punchun for finetuning the model. Additionally, we thank all the authors of this open-source project.
## Citation
### BibTeX
```
@misc{akarajaradwong2025nitibenchcomprehensivestudiesllm,
title={NitiBench: A Comprehensive Studies of LLM Frameworks Capabilities for Thai Legal Question Answering},
author={Pawitsapak Akarajaradwong and Pirat Pothavorn and Chompakorn Chaksangchaichot and Panuthep Tasawong and Thitiwat Nopparatbundit and Sarana Nutanong},
year={2025},
eprint={2502.10868},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.10868},
}
``` |
WonderingNut/TheNuts | WonderingNut | "2022-10-17T21:38:06Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2022-10-17T21:38:06Z" | ---
license: creativeml-openrail-m
---
|
arashghsz/ipxact-generator | arashghsz | "2025-03-24T15:55:13Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2025-03-24T15:50:26Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/arashghsz/ipxact-generator/8e8089a989dac9c52a9bc3453d5a50b2d1d213e6/README.md?%2Farashghsz%2Fipxact-generator%2Fresolve%2Fmain%2FREADME.md=&etag=%22b2a59cdb9ad633e97d99535598823372fce40e12%22 |
AswanthCManoj/azma-zephyr-7b-beta-instruct | AswanthCManoj | "2024-01-10T18:49:22Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | "2024-01-10T18:49:15Z" | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
TheBloke/AlpacaCielo2-7B-8K-GGML | TheBloke | "2023-09-27T13:01:13Z" | 5 | 7 | transformers | [
"transformers",
"llama",
"base_model:totally-not-an-llm/AlpacaCielo2-7b-8k",
"base_model:finetune:totally-not-an-llm/AlpacaCielo2-7b-8k",
"license:llama2",
"region:us"
] | null | "2023-08-09T19:05:55Z" | ---
license: llama2
model_name: AlpacaCielo2 7B 8K
inference: false
model_creator: totally-not-an-llm
model_link: https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k
model_type: llama
quantized_by: TheBloke
base_model: totally-not-an-llm/AlpacaCielo2-7b-8k
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# AlpacaCielo2 7B 8K - GGML
- Model creator: [totally-not-an-llm](https://huggingface.co/totally-not-an-llm)
- Original model: [AlpacaCielo2 7B 8K](https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k)
## Description
This repo contains GGML format model files for [totally-not-an-llm's AlpacaCielo2 7B 8K](https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
### About GGML
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML)
* [totally-not-an-llm's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k)
## Prompt template: System-Human-Assistant-Hashes
```
### System: {system_message}
### Human: {prompt}
### Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [alpacacielo2-7b-8k.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q2_K.bin) | q2_K | 2 | 2.87 GB| 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [alpacacielo2-7b-8k.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [alpacacielo2-7b-8k.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [alpacacielo2-7b-8k.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.60 GB| 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [alpacacielo2-7b-8k.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.83 GB| 6.33 GB | Original quant method, 4-bit. |
| [alpacacielo2-7b-8k.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [alpacacielo2-7b-8k.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [alpacacielo2-7b-8k.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.24 GB| 6.74 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [alpacacielo2-7b-8k.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.65 GB| 7.15 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [alpacacielo2-7b-8k.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [alpacacielo2-7b-8k.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [alpacacielo2-7b-8k.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [alpacacielo2-7b-8k.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q6_K.bin) | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [alpacacielo2-7b-8k.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML/blob/main/alpacacielo2-7b-8k.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.13 GB| 9.63 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
For compatibility with latest llama.cpp, please use GGUF files instead.
```
./main -t 10 -ngl 32 -m alpacacielo2-7b-8k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System: You are a story writing assistant.\n### Human: Write a story about llamas\n### Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
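For this 8K model specifically, a plausible invocation (assuming linear RoPE scaling from the 4096-token Llama-2 base, per the guidance above; verify against your llama.cpp build) would be:
```
./main -t 10 -ngl 32 -m alpacacielo2-7b-8k.ggmlv3.q4_K_M.bin --color -c 8192 --rope-freq-base 10000 --rope-freq-scale 0.5 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System: You are a story writing assistant.\n### Human: Write a story about llamas\n### Assistant:"
```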
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: totally-not-an-llm's AlpacaCielo2 7B 8K
# AlpacaCielo2-7b-8k
<figure>
<img src="https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b/resolve/main/alpaca.png" alt="cute cloud alpaca">
<figcaption style="font-size: 1em;"><i>"super cute baby alpaca laying on a cloud", Model: epicrealism_pureEvolutionV3</i></figcaption>
</figure>
AlpacaCielo2-7b-8k is the second version of the AlpacaCielo series. It is a llama-2 based model designed for creative tasks, such as storytelling and roleplay, while still doing well with other chatbot purposes. It is a triple model merge of Nous-Hermes + Guanaco + LimaRP. While it is mostly *"uncensored"*, it still inherits some alignment from Guanaco.
[GPTQ quants](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GPTQ)<br>
[GGML quants](https://huggingface.co/TheBloke/AlpacaCielo2-7B-8K-GGML)<br>
(Courtesy of TheBloke)
### Differences from V1:
- Double context (4k->8k)
- Better roleplaying abilities
**Performs well with custom prompt format:**
```
### System: {system prompt}
### Human: {prompt}
### Assistant:
```
### Note for system prompt:
The model understands it well and it works great if you want roleplay, but it still likes to be an assistant, so you should nudge it in the right direction. For example:
```
### System: Roleplay as a pirate
### Human: hello
### Assistant: Ahoy, matey! How can I assist you today?
```
### vs.
```
### System: Roleplay as a pirate (not assistant!)
### Human: hello
### Assistant: Arrgh, matey! I be the Captain of this here ship. What business do ye have with me?
```
You could also just use LimaRP prompt format.
*Thanks to previous similar models such as Alpacino, Alpasta, and AlpacaDente for inspiring the creation of this model. Thanks also to the creators of the models involved in the merge. Original models:*
- [Hermes-LLongMA-2](https://huggingface.co/conceptofmind/Hermes-LLongMA-2-7b-8k)
- [Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-7b-guanaco-qlora)
- [LimaRP LoRA](https://huggingface.co/lemonilia/limarp-llama2)
|
SaiChamakura/fine-tuned-visionllama100_0.6dropout | SaiChamakura | "2025-02-13T09:20:20Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-02-12T19:53:37Z" | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: transformers
model_name: fine-tuned-visionllama100_0.6dropout
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for fine-tuned-visionllama100_0.6dropout
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SaiChamakura/fine-tuned-visionllama100_0.6dropout", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
QuantFactory/Orca-2-13b-GGUF | QuantFactory | "2024-10-04T11:17:29Z" | 95 | 1 | null | [
"gguf",
"orca",
"orca2",
"microsoft",
"text-generation",
"arxiv:2311.11045",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-04T09:42:29Z" |
---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
license: other
license_name: microsoft-research-license
license_link: LICENSE
---
[](https://hf.co/QuantFactory)
# QuantFactory/Orca-2-13b-GGUF
This is a quantized version of [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) created using llama.cpp
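As a minimal local-inference sketch (the file name is illustrative; pick any quant from this repo, and note the ChatML-style prompt format shown in the original card below):
```
./llama-cli -m Orca-2-13b.Q4_K_M.gguf -n 256 -p "<|im_start|>system\nYou are Orca, a cautious AI assistant.<|im_end|>\n<|im_start|>user\nWhy is the sky blue?<|im_end|>\n<|im_start|>assistant"
```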
# Original Model Card
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning.
Note that:
1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
3. Beyond reasoning, the model inherits the capabilities and limitations of its base (LLAMA-2). We have already seen that the benefits of the Orca training can be applied to other base models too.
We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for
building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models and limitations caused by its training process,
including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in the future. We value and acknowledge the important role that the research
and open-source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrates very strong performance in zero-shot settings, it does not show
the same gains from few-shot learning compared to other, especially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-13b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we dont want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help prevent content harms. Azure AI Content Safety is a content moderation platform
that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety,
we can moderate the model output by scanning it for sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-13b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
John6666/quadpipe-qp-sdxl-v3-sdxl | John6666 | "2025-01-05T10:39:24Z" | 421 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"photography",
"realism",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-01-05T10:34:17Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- photography
- realism
---
Original model is [here](https://civitai.com/models/996342?modelVersionId=1230000).
This model created by [QuadPipe](https://civitai.com/user/QuadPipe).
|
Druidchoi/Bllossom-Druidchoi-gguf | Druidchoi | "2024-09-19T19:21:15Z" | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:quantized:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-19T19:18:59Z" | ---
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Druidchoi
- **License:** apache-2.0
- **Finetuned from model :** MLP-KTLim/llama-3-Korean-Bllossom-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF | mradermacher | "2025-01-23T11:00:09Z" | 331 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Triangle104/GoldenMistral-Nemo-Humane-Gutenberg",
"base_model:quantized:Triangle104/GoldenMistral-Nemo-Humane-Gutenberg",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-13T17:40:10Z" | ---
base_model: Triangle104/GoldenMistral-Nemo-Humane-Gutenberg
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Triangle104/GoldenMistral-Nemo-Humane-Gutenberg
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
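For the multi-part case specifically, the usual pattern is to download every part and concatenate them in order before loading. A minimal sketch with hypothetical filenames (this repo's quants are single files; split quants typically use a `.partXofY` suffix):
```bash
# Download all parts of a hypothetical split quant
huggingface-cli download mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF \
  --include='*.Q8_0.gguf.part*' --local-dir .
# Concatenate the raw parts back into a single GGUF file, in order
cat GoldenMistral-Nemo-Humane-Gutenberg.Q8_0.gguf.part1of2 \
    GoldenMistral-Nemo-Humane-Gutenberg.Q8_0.gguf.part2of2 \
    > GoldenMistral-Nemo-Humane-Gutenberg.Q8_0.gguf
```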
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GoldenMistral-Nemo-Humane-Gutenberg-GGUF/resolve/main/GoldenMistral-Nemo-Humane-Gutenberg.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zxczxczxcz/nvidia-acemath-1.5b-full | zxczxczxcz | "2025-02-13T07:00:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-13T06:59:34Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MawaredHR_Deepseek-GGUF | mradermacher | "2025-01-30T11:39:43Z" | 335 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:MawaredHR/MawaredHR_Deepseek",
"base_model:quantized:MawaredHR/MawaredHR_Deepseek",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-30T06:41:21Z" | ---
base_model: MawaredHR/MawaredHR_Deepseek
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MawaredHR/MawaredHR_Deepseek
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MawaredHR_Deepseek-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MawaredHR_Deepseek-GGUF/resolve/main/MawaredHR_Deepseek.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mx262/MiniMonkey | mx262 | "2024-11-14T14:49:00Z" | 112 | 6 | null | [
"safetensors",
"internvl_chat",
"custom_code",
"arxiv:2408.02034",
"license:mit",
"region:us"
] | null | "2024-08-23T02:28:18Z" | ---
license: mit
---
# Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models
<br>
<p align="center">
<img src="https://v1.ax1x.com/2024/08/13/7GXu34.png" width="300"/>
</p>
> [**Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models**](https://arxiv.org/abs/2408.02034)<br>
> Mingxin Huang, Yuliang Liu, Dingkang Liang, Lianwen Jin, Xiang Bai <br>
[](https://arxiv.org/abs/2408.02034)
[](http://vlrlab-monkey.xyz:7685)
[](https://huggingface.co/mx262/MiniMokney)
-----
**Mini-Monkey** is a lightweight MLLM that incorporates a plug-and-play method called the multi-scale adaptive cropping strategy (MSAC). Mini-Monkey adaptively generates multi-scale representations, allowing it to select non-segmented objects from various scales. To mitigate the computational overhead introduced by MSAC, we propose a Scale Compression Mechanism (SCM), which effectively compresses image tokens. Mini-Monkey achieves state-of-the-art performance among 2B-parameter MLLMs. It not only demonstrates leading performance on a variety of general multimodal understanding tasks but also shows consistent improvements in document understanding capabilities. On OCRBench, Mini-Monkey achieves a score of 802, outperforming the 8B-parameter state-of-the-art model InternVL2-8B. Besides, our model and training strategy are very efficient and can be trained with only eight RTX 3090 GPUs.
# TODO
- [x] Open source code, weight, and data
- [x] Support training using 3090 GPUs (24Gb video memory)
- [ ] Mini-Monkey with different LLMs
# Model Zoo
Mini-Monkey was trained using 8 RTX 3090 GPUs on the datasets listed under Prepare Training Datasets below.
| Model | #param | MME | RWQA | AI2D | CCB | SEED | HallB | POPE | MathVista | DocVQA | ChartQA | InfoVQA | TextVQA | OCRBench |
|-------|---------|-----|------|------|-----|------|-------|------|-----------|-------------------|-------------------|-------------------|----------------|----------|
| Mini-Gemini | 35B | 2141.0 | - | - | - | - | - | - | 43.3 | - | - | - | - | - |
| LLaVA-NeXT | 35B | 2028.0 | - | 74.9 | 49.2 | 75.9 | 34.8 | 89.6 | 46.5 | - | - | - | - | - |
| InternVL 1.2 | 40B | 2175.4 | 67.5 | 79.0 | 59.2 | 75.6 | 47.6 | 88.0 | 47.7 | - | - | - | - | - |
| InternVL 1.5 | 26B | 2187.8 | 66.0 | 80.7 | 69.8 | 76.0 | 49.3 | 88.3 | 53.5 | 90.9 | 83.8 | 72.5 | 80.6 | 724 |
| DeepSeek-VL | 1.7B | 1531.6 | 49.7 | 51.5 | 37.6 | 43.7 | 27.6 | 85.9 | 29.4 | - | - | - | - | - |
| Mini-Gemini | 2.2B | 1653.0 | - | - | - | - | - | - | 29.4 | - | - | - | - | - |
| Bunny-StableLM-2 | 2B | 1602.9 | - | - | - | 58.8 | - | 85.9 | - | - | - | - | - | - |
| MiniCPM-V-2 | 2.8B | 1808.6 | 55.8 | 62.9 | 48.0 | - | 36.1 | 86.3 | 38.7 | 71.9 | 55.6 | - | 74.1 | 605 |
| InternVL 2 | 2B | 1876.8 | 57.3 | 74.1 | 74.7 | 70.9 | 37.9 | 85.2 | 46.3 | 86.9 | 76.2 | 58.9 | 73.4 | 784 |
| Mini-Monkey (ours) | 2B | 1881.9 | 57.5 | 74.7 | 75.5 | 71.3 | 38.7 | 86.7 | 47.3 | 87.4 | 76.5 | 60.1 | 75.7 | 802 |
## Environment
```bash
conda create -n minimonkey python=3.10
conda activate minimonkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey/project/mini_monkey
pip install -r requirements.txt
```
Install `flash-attn==2.3.6`:
```bash
pip install flash-attn==2.3.6 --no-build-isolation
```
Alternatively you can compile from source:
```bash
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
git checkout v2.3.6
python setup.py install
```
## Evaluate
We use the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repository for model evaluation.
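A typical invocation looks like the following; the `--model` key must match a model registered in VLMEvalKit's config, so treat the name below as a placeholder:
```bash
# Evaluate on OCRBench via VLMEvalKit (model key is a placeholder)
python run.py --data OCRBench --model MiniMonkey --verbose
```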
## Inference
We provide an example of inference code [here](https://github.com/Yuliang-Liu/Monkey/blob/main/project/mini_monkey/demo.py)
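For orientation, a minimal loading sketch is shown below. It assumes the InternVL-style remote-code interface implied by the repo's `internvl_chat` tag, and the commented `chat` call is an assumption — `demo.py` is authoritative for the exact signature:
```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "mx262/MiniMonkey"
# trust_remote_code is needed because the repo ships a custom architecture
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
# InternVL-style chat entry point (assumed; see demo.py for the real call):
# response = model.chat(tokenizer, pixel_values, "Describe this image.", generation_config)
```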
## Train
### Prepare Training Datasets
Inspired by InternVL 1.2, we adopted [LLaVA-ZH](https://huggingface.co/datasets/openbmb/llava_zh), [DVQA](https://github.com/kushalkafle/DVQA_dataset), [ChartQA](https://github.com/vis-nlp/ChartQA), [AI2D](https://allenai.org/data/diagrams), [DocVQA](https://www.docvqa.org/datasets), [GeoQA+](https://github.com/SCNU203/GeoQA-Plus), and [SynthDoG-EN](https://huggingface.co/datasets/naver-clova-ix/synthdog-en). Most of the data remains consistent with InternVL 1.2.
First, download the [annotation files](https://huggingface.co/OpenGVLab/InternVL/resolve/main/playground.zip) and place them in the `playground/opensource/` folder.
Second, download all the images we used.
- AI2D: [ai2d_images](https://drive.google.com/file/d/1dqqa3MnrxMXaU_K9JA6C83je32ibwdOY/view?usp=sharing) (provided by InternLM-XComposer)
- ChartQA: [ChartQA Dataset](https://huggingface.co/datasets/ahmed-masry/ChartQA/resolve/main/ChartQA%20Dataset.zip)
- COCO: [train2017](http://images.cocodataset.org/zips/train2017.zip)
- DocVQA: [train](https://datasets.cvc.uab.es/rrc/DocVQA/train.tar.gz), [val](https://datasets.cvc.uab.es/rrc/DocVQA/val.tar.gz), [test](https://datasets.cvc.uab.es/rrc/DocVQA/test.tar.gz)
- DVQA: [images](https://drive.google.com/file/d/1iKH2lTi1-QxtNUVRxTUWFvUvRHq6HAsZ/view)
- LLaVA-Pretrain: [images](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/resolve/main/images.zip)
- SynthDoG-EN: We only use 00000~00004 parquet files for now, with a total of 30K images. We provide the converted [images](https://huggingface.co/OpenGVLab/InternVL/resolve/main/synthdog-en-images.zip).
- GeoQA+: [GeoQA+](https://drive.google.com/file/d/1KL4_wIzr3p8XSKMkkLgYcYwCbb0TzZ9O/view) [images](https://huggingface.co/OpenGVLab/InternVL/resolve/main/geoqa%2B_images.zip)
Then, organize the data as follows in `playground/data`:
```none
playground/
├── opensource
│ ├── ai2d_train_12k.jsonl
│ ├── chartqa_train_18k.jsonl
│ ├── docvqa_train_10k.jsonl
│ ├── dvqa_train_200k.jsonl
│ ├── geoqa+.jsonl
│ ├── llava_instruct_150k_zh.jsonl
│ └── synthdog_en.jsonl
├── data
│ ├── ai2d
│ │ ├── abc_images
│ │ └── images
│ ├── chartqa
│ │ ├── test
│ │ ├── train
│ │ └── val
│ ├── coco
│ │ └── train2017
│ ├── docvqa
│ │ ├── test
│ │ ├── train
│ │ └── val
│ ├── dvqa
│ │ └── images
│ ├── llava
│ │ └── llava_pretrain
│ │ └── images
│ ├── synthdog-en
│ │ └── images
│ ├── geoqa+
│ │ └── images
```
Execute the training code:
```bash
sh shell/minimonkey/minimonkey_finetune_full.sh
```
## Citing Mini-Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entries:
```BibTeX
@article{huang2024mini,
title={Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models},
author={Huang, Mingxin and Liu, Yuliang and Liang, Dingkang and Jin, Lianwen and Bai, Xiang},
journal={arXiv preprint arXiv:2408.02034},
year={2024}
}
```
## Copyright
We welcome suggestions to help us improve the Mini-Monkey. For any query, please contact Dr. Yuliang Liu: [email protected]. If you find something interesting, please also feel free to share with us through email or open an issue. |
S4nto/lora-dpo-finetuned-model-beta-0.1-rate-1e5-stage2-iter40000-sft | S4nto | "2024-05-16T06:14:38Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-16T06:04:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jeethu/DeepSeek-R1-Distill-Qwen-7B-PLLM | Jeethu | "2025-02-04T16:26:43Z" | 11 | 0 | mlc-llm | [
"mlc-llm",
"chat",
"text-generation",
"conversational",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:mit",
"region:us"
] | text-generation | "2025-02-04T16:23:20Z" | ---
license: mit
language:
- en
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
base_model_relation: quantized
library_name: mlc-llm
pipeline_tag: text-generation
tags:
- chat
---
4-bit GPTQ quantized version of [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B).
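If the weights are packaged in MLC format (as the `mlc-llm` library tag suggests), loading them with MLC LLM's Python engine might look like the sketch below — the entry point and `HF://` URI scheme are assumptions based on recent mlc-llm releases:
```python
from mlc_llm import MLCEngine

# "HF://" URIs resolve to Hugging Face repos in recent mlc-llm releases (assumed)
engine = MLCEngine("HF://Jeethu/DeepSeek-R1-Distill-Qwen-7B-PLLM")

# OpenAI-style streaming chat completion
for chunk in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is 9 + 10?"}],
    stream=True,
):
    for choice in chunk.choices:
        print(choice.delta.content or "", end="", flush=True)

engine.terminate()
```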
|
danieliser/Reinforce-Pixelcopter-PLE-v0-1 | danieliser | "2023-05-30T09:00:57Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-30T08:51:13Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 89.10 +/- 51.48
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ShenaoZ/0.001_idpo_same_scratch_iter_3 | ShenaoZ | "2024-04-14T21:24:31Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-14T17:53:31Z" | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_idpo_same_scratch_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_idpo_same_scratch_iter_3
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
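As a rough illustration only, these values map onto a TRL `DPOConfig` roughly as follows; the actual run used the alignment-handbook recipe, so the names and defaults below are assumptions:
```python
from trl import DPOConfig

# Hypothetical mapping of the listed hyperparameters onto TRL's DPOConfig
config = DPOConfig(
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 2 accumulation = 128 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```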
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
google/owlv2-base-patch16-finetuned | google | "2024-10-31T14:56:18Z" | 1,815 | 3 | transformers | [
"transformers",
"pytorch",
"owlv2",
"zero-shot-object-detection",
"vision",
"arxiv:2306.09683",
"license:apache-2.0",
"region:us"
] | zero-shot-object-detection | "2023-10-13T09:37:34Z" | ---
license: apache-2.0
tags:
- vision
- zero-shot-object-detection
inference: false
---
# Model Card: OWLv2
## Model Details
The OWLv2 model (short for Open-World Localization) was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2, like OWL-ViT, is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries.
The model uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
### Model Date
June 2023
### Model Type
The model uses a CLIP backbone with a ViT-B/16 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and fine-tuned together with the box and class prediction heads with an object detection objective.
### Documents
- [OWLv2 Paper](https://arxiv.org/abs/2306.09683)
### Use with Transformers
```python
import requests
from PIL import Image
import torch
from transformers import Owlv2Processor, Owlv2ForObjectDetection
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-finetuned")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-finetuned")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to Pascal VOC Format (xmin, ymin, xmax, ymax)
results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
```
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, text-conditioned object detection. We also hope it can be used for interdisciplinary studies of the potential impact of such models, especially in areas that commonly require identifying objects whose label is unavailable during training.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
## Data
The CLIP backbone of the model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet. The prediction heads of OWL-ViT, along with the CLIP backbone, are fine-tuned on publicly available object detection datasets such as [COCO](https://cocodataset.org/#home) and [OpenImages](https://storage.googleapis.com/openimages/web/index.html).
(to be updated for v2)
### BibTeX entry and citation info
```bibtex
@misc{minderer2023scaling,
title={Scaling Open-Vocabulary Object Detection},
author={Matthias Minderer and Alexey Gritsenko and Neil Houlsby},
year={2023},
eprint={2306.09683},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
Triangle104/OpenThinker-7B-Q4_K_M-GGUF | Triangle104 | "2025-02-14T15:50:36Z" | 28 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"dataset:open-thoughts/open-thoughts-114k",
"base_model:open-thoughts/OpenThinker-7B",
"base_model:quantized:open-thoughts/OpenThinker-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-01T01:44:25Z" | ---
library_name: transformers
license: apache-2.0
base_model: open-thoughts/OpenThinker-7B
tags:
- llama-factory
- full
- generated_from_trainer
- llama-cpp
- gguf-my-repo
datasets:
- open-thoughts/open-thoughts-114k
model-index:
- name: OpenThinker-7B
results: []
---
# Triangle104/OpenThinker-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`open-thoughts/OpenThinker-7B`](https://huggingface.co/open-thoughts/OpenThinker-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/open-thoughts/OpenThinker-7B) for more details on the model.
---
This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on the
OpenThoughts-114k dataset.
The dataset is derived by distilling DeepSeek-R1 using the data pipeline available on GitHub.
More information about the dataset can be found in the OpenThoughts-114k dataset card.
This model improves upon the Bespoke-Stratos-7B model, which used 17k examples (Bespoke-Stratos-17k dataset).
The numbers reported in the table below are evaluated with our open-source tool Evalchemy.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/OpenThinker-7B-Q4_K_M-GGUF --hf-file openthinker-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/OpenThinker-7B-Q4_K_M-GGUF --hf-file openthinker-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/OpenThinker-7B-Q4_K_M-GGUF --hf-file openthinker-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/OpenThinker-7B-Q4_K_M-GGUF --hf-file openthinker-7b-q4_k_m.gguf -c 2048
```
|
TheBloke/JanniesBasedLigma-L2-13B-GGUF | TheBloke | "2023-09-27T12:48:54Z" | 98 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Sao10K/JanniesBasedLigma-L2-13B",
"base_model:quantized:Sao10K/JanniesBasedLigma-L2-13B",
"license:llama2",
"region:us"
] | null | "2023-09-12T12:24:30Z" | ---
language:
- en
license: llama2
model_name: JanniesBasedLigma L2 13B
base_model: Sao10K/JanniesBasedLigma-L2-13B
inference: false
model_creator: Sao10k
model_type: llama
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# JanniesBasedLigma L2 13B - GGUF
- Model creator: [Sao10k](https://huggingface.co/Sao10k)
- Original model: [JanniesBasedLigma L2 13B](https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sao10k's JanniesBasedLigma L2 13B](https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF)
* [Sao10k's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
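As a sanity check on these figures, here is the arithmetic for Q4_K under the layout described above (treating the two super-block scale/min values as fp16, which is an assumption):
```
Q4_K super-block = 8 blocks x 32 weights = 256 weights
  256 weights x 4 bits          = 1024 bits
  8 block scales x 6 bits       =   48 bits
  8 block mins   x 6 bits       =   48 bits
  2 fp16 super-block values     =   32 bits
  total                         = 1152 bits -> 1152 / 256 = 4.5 bpw
```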
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [janniesbasedligma-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [janniesbasedligma-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [janniesbasedligma-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [janniesbasedligma-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [janniesbasedligma-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [janniesbasedligma-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [janniesbasedligma-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [janniesbasedligma-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [janniesbasedligma-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [janniesbasedligma-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [janniesbasedligma-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [janniesbasedligma-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/JanniesBasedLigma-L2-13B-GGUF/blob/main/janniesbasedligma-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/JanniesBasedLigma-L2-13B-GGUF and below it, a specific filename to download, such as: janniesbasedligma-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/JanniesBasedLigma-L2-13B-GGUF janniesbasedligma-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/JanniesBasedLigma-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/JanniesBasedLigma-L2-13B-GGUF janniesbasedligma-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m janniesbasedligma-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
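For example, the same invocation in chat mode looks like this:
```shell
./main -ngl 32 -m janniesbasedligma-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```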
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/JanniesBasedLigma-L2-13B-GGUF", model_file="janniesbasedligma-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
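A minimal llama-cpp-python equivalent is sketched below, using that library's standard `Llama` constructor:
```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 for CPU-only inference; n_ctx is the context window.
llm = Llama(
    model_path="janniesbasedligma-l2-13b.Q4_K_M.gguf",
    n_gpu_layers=35,
    n_ctx=4096,
)
output = llm(
    "You are a helpful AI assistant.\n\nUSER: What is 9+10?\nASSISTANT:",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```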
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
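For reference, a minimal LangChain wrapper sketch (the import path reflects 2023-era LangChain and may have moved in later versions):
```python
from langchain.llms import LlamaCpp

# LlamaCpp wraps llama-cpp-python; parameters mirror the library's Llama class
llm = LlamaCpp(
    model_path="janniesbasedligma-l2-13b.Q4_K_M.gguf",
    n_gpu_layers=35,
    n_ctx=4096,
)
print(llm("You are a helpful AI assistant.\n\nUSER: Name three colors.\nASSISTANT:"))
```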
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sao10k's JanniesBasedLigma L2 13B

GGUF Quants:
https://huggingface.co/Sao10K/JanniesBasedLigma-L2-13B-GGUF
Based Model, Schizophrenic if there is no context. Surprisingly... It's not bad when you use an ongoing RP. It feels like your... regular model.
Prompt Format? Idk, I don't know any of this. LoRA'd the [Based Dataset](https://huggingface.co/datasets/ehartford/based) myself.
Merged the LoRAs [Ligma 13B](https://huggingface.co/kubernetes-bad/Ligma-L2-13b), [Jannie 13B](https://huggingface.co/v2ray/LLaMA-2-Jannie-13B-QLoRA) myself.
I recommend Vicuna 1.1, but other formats work fine.
```
USER: What is 9+10?
ASSISTANT:
```
Made while downloading various 70B models, Euryale-70B is halfway done, P1 complete, P2 otw.
<br>
<br>
<br>
Maybe this will help some of the Schizo Anons in /lmg.
Ty to all the feedback and support from other Anons.
EXAMPLES BELOW WITH NO CONTEXT / HISTORY, REPLIES ARE SOMEHOW UNRELATED TO QUESTION:



<!-- original-model-card end -->
|
btamm12/bert-base-uncased-finetuned-wls-manual-10ep-lower | btamm12 | "2023-09-02T15:50:08Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-09-02T15:47:54Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wls-manual-10ep-lower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wls-manual-10ep-lower
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
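For reference, assuming a standard 🤗 `Trainer` setup (the actual training script is not published, and `output_dir` is illustrative), the settings above map onto `TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-wls-manual-10ep-lower",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = total train batch size of 64
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```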
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1089 | 0.93 | 7 | 1.9417 |
| 1.5952 | 2.0 | 15 | 1.5688 |
| 1.4717 | 2.93 | 22 | 1.4364 |
| 1.3673 | 4.0 | 30 | 1.4096 |
| 1.2666 | 4.93 | 37 | 1.2430 |
| 1.2398 | 6.0 | 45 | 1.2435 |
| 1.2056 | 6.93 | 52 | 1.2533 |
| 1.1372 | 8.0 | 60 | 1.3034 |
| 1.1384 | 8.93 | 67 | 1.2087 |
| 1.1148 | 9.33 | 70 | 1.2141 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
semindan/xnli_m_bert_only_bg | semindan | "2023-01-07T14:23:26Z" | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:xnli",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-11-26T23:05:29Z" | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- xnli
metrics:
- accuracy
model-index:
- name: xnli_m_bert_only_bg
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: xnli
type: xnli
config: bg
split: train
args: bg
metrics:
- name: Accuracy
type: accuracy
value: 0.7365461847389558
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xnli_m_bert_only_bg
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2991
- Accuracy: 0.7365
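For a quick inference check, something like the following should work (a minimal sketch assuming the standard `transformers` text-classification pipeline; the Bulgarian premise/hypothesis pair is illustrative, and the label names depend on the `id2label` mapping stored in the config):

```python
from transformers import pipeline

nli = pipeline("text-classification", model="semindan/xnli_m_bert_only_bg")
# XNLI pairs a premise with a hypothesis (entailment / neutral / contradiction).
print(nli({"text": "Котката спи на дивана.", "text_pair": "Животното почива."}))
```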
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6597 | 1.0 | 3068 | 0.6952 | 0.7052 |
| 0.5758 | 2.0 | 6136 | 0.6158 | 0.7422 |
| 0.4912 | 3.0 | 9204 | 0.6293 | 0.7486 |
| 0.4073 | 4.0 | 12272 | 0.6818 | 0.7353 |
| 0.3286 | 5.0 | 15340 | 0.7461 | 0.7438 |
| 0.2562 | 6.0 | 18408 | 0.8900 | 0.7337 |
| 0.1959 | 7.0 | 21476 | 0.9912 | 0.7333 |
| 0.1483 | 8.0 | 24544 | 1.0983 | 0.7285 |
| 0.1097 | 9.0 | 27612 | 1.1904 | 0.7333 |
| 0.0811 | 10.0 | 30680 | 1.2991 | 0.7365 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
finnstrom3693/cpsd-turbo-exp-base-v0.3 | finnstrom3693 | "2024-09-17T00:01:03Z" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-09-15T18:21:28Z" | ---
library_name: diffusers
inference: false
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
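The repo tags mark this as a `StableDiffusionPipeline` checkpoint, so a minimal `diffusers` sketch would look like the following (assuming a CUDA GPU; the prompt and output path are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "finnstrom3693/cpsd-turbo-exp-base-v0.3", torch_dtype=torch.float16
).to("cuda")
image = pipe("a lighthouse on a cliff at sunset").images[0]
image.save("output.png")
```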
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alchemonaut/QuartetAnemoi-70B-t0.0001-GGUF | alchemonaut | "2024-02-10T08:00:35Z" | 12 | 4 | null | [
"gguf",
"merge",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2024-02-08T03:03:29Z" | ---
tags:
- merge
license: other
model-index:
- name: QuartetAnemoi-70B-t0.0001
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/QuartetAnemoi-70B-t0.0001
name: Open LLM Leaderboard
---
<img src="https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001/resolve/main/anemoi.png">
# QuartetAnemoi-70B-t0.0001
A sequential merge using a custom algorithm (NearSwap) of:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)
- [Aurora-Nights-70B-v1.0](https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0)
- [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1)
<br/>
In our testing, this model seems like a storyteller, as might be expected, but the changes from this merge are extremely soft. We were impressed that, unlike most models, at the end of a story it did not often use cliches such as "In the end", "And so", "beacon of hope", etc.
<br/>
This repo has the GGUF quants. Full weights are available at: [alchemonaut/QuartetAnemoi-70B-t0.0001](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001)
<br/>
<br/>
# NearSwap Algorithm
NearSwap retains most of the weights of the base model (Miqu), but when a weight is similar between the two, it is interpolated to the secondary model value. A parameter *t* specifies the sameness threshold. When the distance between two values is below *t*, the weight from the secondary model is used.
This version of the model uses *t* = 0.0001. At this *t*, about 0.8% of weights are fully switched to the secondary model during each pass. Model quality rapidly degrades above *t* = 0.0025:
- *t* = 0.0001 (~0.8% full swap): This model
- *t* = 0.0003 (~2% full swap)
- *t* = 0.001 (~10% full swap): [BoreanGale-70B](https://huggingface.co/alchemonaut/BoreanGale-70B)
- *t* = 0.0025 (~18% full swap): Generates one paragraph okay, but then reverts to garbage
- *t* = 0.005 (~35% full swap): Garbage; semi-related word lists
- *t* = 0.01 (~55% full swap): Garbage; pseudorandom tokens output
For QuartetAnemoi-70B-t0.0001, the three secondary models were each merged sequentially with *t* = 0.0001.
NearSwap implementation (lightly completed here with the imports and the `lerp` helper needed to make it runnable):
```python
from typing import Union

import numpy
import torch

def lerp(t, v0, v1):
    return (1 - t) * v0 + t * v1  # elementwise linear interpolation

def nearswap(t: Union[float, numpy.ndarray],
             v0: Union[numpy.ndarray, torch.Tensor],
             v1: Union[numpy.ndarray, torch.Tensor]):
    lweight = numpy.absolute(v0 - v1)  # elementwise distance between the two models
    lweight = t / lweight              # near-identical weights get ratios above 1
    lweight = numpy.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    numpy.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)  # cap at a full swap
    return lerp(lweight, v0, v1)
```
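For illustration, here is one hypothetical way to apply this across two checkpoints; `base_sd` and `secondary_sd` are placeholder names for matching `state_dict()` mappings, not identifiers from the actual merge script:

```python
merged_sd = {
    name: torch.from_numpy(
        nearswap(0.0001, base_sd[name].float().numpy(), secondary_sd[name].float().numpy())
    )
    for name in base_sd  # assumes both checkpoints share identical keys and shapes
}
```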
<br/>
<br/>
# License and Use
Since the ultimate origin of Miqu is at this time unknown beyond speculation, this model is for noncommercial research use only.
<br/>
<br/>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_alchemonaut__QuartetAnemoi-70B-t0.0001)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.86|
|AI2 Reasoning Challenge (25-Shot)|73.38|
|HellaSwag (10-Shot) |88.9|
|MMLU (5-Shot) |75.42|
|TruthfulQA (0-shot) |69.53|
|Winogrande (5-shot) |85.32|
|GSM8k (5-shot) |68.61|
|
zelk12/MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF | zelk12 | "2025-01-02T20:39:27Z" | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:zelk12/MT-Max-Merge_02012025163610-gemma-2-9B",
"base_model:quantized:zelk12/MT-Max-Merge_02012025163610-gemma-2-9B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-02T20:38:50Z" | ---
base_model: zelk12/MT-Max-Merge_02012025163610-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: gemma
pipeline_tag: text-generation
---
# zelk12/MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT-Max-Merge_02012025163610-gemma-2-9B`](https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-gemma-2-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT-Max-Merge_02012025163610-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF --hf-file mt-max-merge_02012025163610-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF --hf-file mt-max-merge_02012025163610-gemma-2-9b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF --hf-file mt-max-merge_02012025163610-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT-Max-Merge_02012025163610-gemma-2-9B-Q6_K-GGUF --hf-file mt-max-merge_02012025163610-gemma-2-9b-q6_k.gguf -c 2048
```
|
SzegedAI/100M_deberta-base_seed0_cp_100000 | SzegedAI | "2024-07-22T18:59:32Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-07-22T18:15:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
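Given the `fill-mask` pipeline tag on this repo, a minimal sketch would be (the example sentence is illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="SzegedAI/100M_deberta-base_seed0_cp_100000")
print(fill("The capital of France is [MASK]."))
```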
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
visdata/sn102 | visdata | "2024-12-18T04:32:59Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-18T04:26:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
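Given the `text-generation` pipeline tag on this repo, a minimal sketch would be (the prompt and generation length are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/sn102")
print(generator("Once upon a time,", max_new_tokens=50)[0]["generated_text"])
```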
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF | mradermacher | "2025-03-25T06:50:09Z" | 388 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"conversational",
"ja",
"base_model:rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b",
"base_model:quantized:rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-24T21:27:52Z" | ---
base_model: rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b
language:
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- qwen2
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-distill-qwen2.5-bakeneko-32b-GGUF/resolve/main/deepseek-r1-distill-qwen2.5-bakeneko-32b.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/BaeZel_V2-8B-LINEAR-GGUF | mradermacher | "2025-01-10T03:17:49Z" | 340 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DreadPoor/BaeZel_V2-8B-LINEAR",
"base_model:quantized:DreadPoor/BaeZel_V2-8B-LINEAR",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-09T05:30:56Z" | ---
base_model: DreadPoor/BaeZel_V2-8B-LINEAR
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/DreadPoor/BaeZel_V2-8B-LINEAR
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V2-8B-LINEAR-GGUF/resolve/main/BaeZel_V2-8B-LINEAR.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Highbrow/gemma-Code-Instruct-Finetune-test | Highbrow | "2024-02-29T10:18:38Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-29T09:49:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
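The repo is tagged `conversational`, so a minimal sketch using the standard `transformers` chat-template API would look like the following (assuming `accelerate` is installed for `device_map="auto"`; the prompt is illustrative, and nothing here is taken from the author's setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Highbrow/gemma-Code-Instruct-Finetune-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```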
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kazzx9921/deepseek-r1-distill-llama-8b-geekaz-gguf | Kazzx9921 | "2025-03-08T07:38:06Z" | 39 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-08T06:28:57Z" |
# deepseek-r1-distill-llama-8b-geekaz GGUF Models
This repository contains GGUF versions of the deepseek-r1-distill-llama-8b-geekaz model.
## Available Models
The following GGUF models are available:
- `deepseek-r1-distill-llama-8b-geekaz-f16.gguf` (15324.49 MB)
- `deepseek-r1-distill-llama-8b-geekaz-q8_0.gguf` (8145.12 MB)
- `deepseek-r1-distill-llama-8b-geekaz-q4_k_m.gguf` (4692.78 MB)
## Usage
You can use these models with [llama.cpp](https://github.com/ggerganov/llama.cpp).
Example usage:
```bash
./main -m path_to_model.gguf -n 1024
```
## Fine-tuning quality: not yet tested
|
artificialguybr/coloringbook-redmond-2-1v-coloring-book-lora-for-freedomredmond-sd-2-1 | artificialguybr | "2023-11-23T16:10:36Z" | 18 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"coloring book",
"style",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:other",
"region:us"
] | text-to-image | "2023-11-23T16:10:35Z" | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- coloring book
- style
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: Coloring Book
widget:
- text: 'A cute owl, ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861082.jpeg
- text: 'A lion, ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861085.jpeg
- text: 'A fat cat, ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861091.jpeg
- text: 'A Super Hero ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861092.jpeg
- text: 'A stunning asian woman, portrait, close ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861097.jpeg
- text: 'A stunning asian woman, portrait, close ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861099.jpeg
- text: 'Mandala ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861104.jpeg
- text: 'Mandala ,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861106.jpeg
- text: 'A Ferrari,Coloring Book, ColoringBookAF,, '
output:
url: >-
3861110.jpeg
- text: 'A Pirate boy, Coloring Book, ColoringBookAF,, '
output:
url: >-
3861111.jpeg
---
# ColoringBook.Redmond 2.1V - Coloring Book Lora for FreedomRedmond SD 2.1
<Gallery />
## Model description
ColoringBook.Redmond 2.1V for FreedomRedmond SD 2.1 is here!

Introducing ColoringBook.Redmond 2.1V for Freedom Redmond SD 2.1, the ultimate LoRA for creating coloring-book images!

I'm grateful for the GPU time from **Redmond.AI** that allowed me to make this LoRA! If you need GPU, then you need the great services from [Redmond.AI](http://Redmond.AI).

Test all my LoRAs [here](https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora) for free and unlimited. Thanks, HF, for the Inference API!

It is based on **Freedom Redmond SD 2.1** and fine-tuned on a large dataset.

The LoRA has a high capacity to generate coloring-book images!

**The tag for the model: ColoringBookAF**

I really hope you like the LoRA and use it.

If you like the model and think it's worth it, you can make a donation on my Patreon or Ko-fi.

Patreon: https://www.patreon.com/user?u=81570187

Ko-fi: https://ko-fi.com/artificialguybr

BuyMeACoffee: https://www.buymeacoffee.com/jvkape

Follow me on Twitter to be the first to know about new models: https://twitter.com/artificialguybr/
## Trigger words
You should use `Coloring Book`, `ColoringBookAF` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/artificialguybr/coloringbook-redmond-2-1v-coloring-book-lora-for-freedomredmond-sd-2-1/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-2-1-base', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('artificialguybr/coloringbook-redmond-2-1v-coloring-book-lora-for-freedomredmond-sd-2-1', weight_name='ColoringBookRedmond21V-FreedomRedmond-ColoringBook-ColoringBookAF.safetensors')
image = pipeline('A Pirate boy, Coloring Book, ColoringBookAF,, ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mrm8488/ppo-CartPole-v1 | mrm8488 | "2022-01-27T15:13:48Z" | 0 | 1 | null | [
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
---
# PPO CartPole v1 🤖⚖️
This is a pre-trained model of a PPO agent playing CartPole-v1 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
<video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/ppo-CartPole-v1/resolve/main/output.mp4"></video>
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="mrm8488/ppo-CartPole-v1", filename="cartpole-v1.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('CartPole-v1')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play (reusing the evaluation environment created above)
obs = eval_env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = eval_env.step(action)
    eval_env.render()
    if done:
        obs = eval_env.reset()
eval_env.close()
```
### Evaluation Results
Mean reward: 500.00 +/- 0.0
|
sacasdcdacadcf/roberta-base_ag_news2 | sacasdcdacadcf | "2024-04-19T09:29:31Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-19T09:29:08Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base_ag_news2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_ag_news2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3846
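For a quick inference check, something like the following should work (a sketch assuming the model keeps the usual four-class AG News label set; the headline is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sacasdcdacadcf/roberta-base_ag_news2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Stocks rallied after the central bank held rates steady.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # AG News classes are typically World, Sports, Business, Sci/Tech
```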
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3506 | 1.0 | 375 | 0.3879 |
| 0.3511 | 2.0 | 750 | 0.3846 |
| 0.2484 | 3.0 | 1125 | 0.4752 |
| 0.1336 | 4.0 | 1500 | 0.4913 |
| 0.0565 | 5.0 | 1875 | 0.5226 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
fedovtt/df4b6aad-36b4-47bf-9869-cc9755ced4f6 | fedovtt | "2025-01-24T05:34:05Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-24T04:32:53Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df4b6aad-36b4-47bf-9869-cc9755ced4f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be25ce38282aeb5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be25ce38282aeb5a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/df4b6aad-36b4-47bf-9869-cc9755ced4f6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/be25ce38282aeb5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 14fba03c-c528-4737-ac1e-1f62f6edce20
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 14fba03c-c528-4737-ac1e-1f62f6edce20
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# df4b6aad-36b4-47bf-9869-cc9755ced4f6
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0003 | 10 | nan |
| 0.0 | 0.0005 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DivineJMd/recipeChatmodel | DivineJMd | "2024-08-23T14:52:37Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-23T14:47:24Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** DivineJMd
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xxChrisYang/food_classifier | xxChrisYang | "2023-11-07T07:29:26Z" | 5 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-07T06:28:07Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: xxChrisYang/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xxChrisYang/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3677
- Validation Loss: 0.3606
- Train Accuracy: 0.904
- Epoch: 4
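Since the checkpoint is TensorFlow-based, a minimal inference sketch would be (`framework="tf"` forces the TF weights; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="xxChrisYang/food_classifier", framework="tf")
print(classifier("path/to/food_photo.jpg"))  # placeholder local image path
```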
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7467 | 1.6168 | 0.832 | 0 |
| 1.1704 | 0.7672 | 0.907 | 1 |
| 0.6836 | 0.5157 | 0.913 | 2 |
| 0.4500 | 0.4047 | 0.914 | 3 |
| 0.3677 | 0.3606 | 0.904 | 4 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Kevincp560/distilbart-cnn-12-6-finetuned-pubmed | Kevincp560 | "2022-03-06T22:33:03Z" | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:pub_med_summarization_dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-06T16:25:29Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 40.0985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-pubmed
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9895
- Rouge1: 40.0985
- Rouge2: 16.5016
- Rougel: 24.8319
- Rougelsum: 36.0775
- Gen Len: 141.884
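As a minimal usage sketch (the article text is a placeholder; the length limits are illustrative, chosen to roughly match the reported generation length):

```python
# Minimal sketch: summarize a PubMed-style article with the fine-tuned model.
from transformers import pipeline

summarizer = pipeline("summarization", model="Kevincp560/distilbart-cnn-12-6-finetuned-pubmed")
article = "Background: ... Methods: ... Results: ..."  # placeholder document text
summary = summarizer(article, max_length=142, min_length=56, do_sample=False)
print(summary[0]["summary_text"])
```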
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.1709 | 1.0 | 4000 | 2.0257 | 38.1012 | 15.112 | 23.4064 | 33.9373 | 141.9195 |
| 1.9495 | 2.0 | 8000 | 1.9593 | 39.529 | 16.1693 | 24.487 | 35.5238 | 141.9785 |
| 1.756 | 3.0 | 12000 | 1.9488 | 39.9623 | 16.5799 | 24.949 | 35.9194 | 141.8855 |
| 1.6032 | 4.0 | 16000 | 1.9732 | 39.672 | 16.1994 | 24.5996 | 35.7021 | 141.921 |
| 1.4817 | 5.0 | 20000 | 1.9895 | 40.0985 | 16.5016 | 24.8319 | 36.0775 | 141.884 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
LucaLobefalo/bert-uncased | LucaLobefalo | "2023-06-04T09:23:48Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-06-04T07:23:21Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LucaLobefalo/bert-uncased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LucaLobefalo/bert-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5861
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16596, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2886 | 0 |
| 0.7925 | 1 |
| 0.5861 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jonatasgrosman/exp_w2v2t_de_vp-it_s962 | jonatasgrosman | "2022-07-10T12:46:24Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-10T12:45:54Z" | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_de_vp-it_s962
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
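A minimal transcription sketch with HuggingSound (the audio paths are placeholders; remember the 16kHz sampling requirement above):

```python
# Minimal sketch: transcribe German audio files with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_vp-it_s962")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```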
|
sb3/ppo-BeamRiderNoFrameskip-v4 | sb3 | "2022-10-11T15:12:02Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-02T12:58:20Z" | ---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 3819.20 +/- 1694.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
---
# **PPO** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env BeamRiderNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env BeamRiderNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
KatarLegacy/ChilloutMixss3 | KatarLegacy | "2023-05-29T13:05:33Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-29T13:04:41Z" | ---
license: creativeml-openrail-m
---
|
cleanrl/Assault-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3 | cleanrl | "2023-03-25T03:44:58Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Assault-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-25T03:44:56Z" | ---
tags:
- Assault-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Assault-v5
type: Assault-v5
metrics:
- type: mean_reward
value: 4388.60 +/- 3248.40
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Assault-v5**
This is a trained model of a PPO agent playing Assault-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Assault-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Assault-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Assault-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Assault-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Assault-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Assault-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
|
smile3634/ks-chungcheong-nmt-v2 | smile3634 | "2023-01-31T08:55:44Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-01-31T08:03:30Z" | ---
tags:
- generated_from_trainer
model-index:
- name: ks-chungcheong-nmt-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ks-chungcheong-nmt-v2
This model is a fine-tuned version of [smile3634/ks-chungcheong-nmt-v1](https://huggingface.co/smile3634/ks-chungcheong-nmt-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2771
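The checkpoint is an encoder-decoder model, so a minimal inference sketch could use the text2text-generation pipeline; the input sentence is a placeholder, and this assumes the saved config carries the needed generation settings (e.g. the decoder start token).

```python
# Minimal sketch: run the encoder-decoder checkpoint as text2text generation.
from transformers import pipeline

translator = pipeline("text2text-generation", model="smile3634/ks-chungcheong-nmt-v2")
print(translator("표준어 문장을 입력하세요.")[0]["generated_text"])  # placeholder input
```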
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8849 | 0.16 | 500 | 1.8573 |
| 1.7337 | 0.32 | 1000 | 1.6565 |
| 1.5085 | 0.49 | 1500 | 1.2008 |
| 1.1849 | 0.65 | 2000 | 0.8869 |
| 0.9535 | 0.81 | 2500 | 0.6806 |
| 0.8032 | 0.97 | 3000 | 0.5774 |
| 0.5472 | 1.13 | 3500 | 0.5094 |
| 0.4656 | 1.29 | 4000 | 0.4570 |
| 0.4269 | 1.46 | 4500 | 0.4286 |
| 0.3916 | 1.62 | 5000 | 0.4011 |
| 0.3564 | 1.78 | 5500 | 0.3676 |
| 0.3259 | 1.94 | 6000 | 0.3422 |
| 0.2513 | 2.1 | 6500 | 0.3312 |
| 0.2101 | 2.26 | 7000 | 0.3111 |
| 0.1991 | 2.43 | 7500 | 0.2985 |
| 0.1893 | 2.59 | 8000 | 0.2960 |
| 0.1774 | 2.75 | 8500 | 0.2823 |
| 0.1687 | 2.91 | 9000 | 0.2771 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
SudiptoPramanik/TRL | SudiptoPramanik | "2023-11-26T08:20:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2-large",
"base_model:adapter:openai-community/gpt2-large",
"region:us"
] | null | "2023-11-25T11:40:00Z" | ---
library_name: peft
base_model: gpt2-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
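In the absence of an official snippet, a minimal sketch — assuming this repository holds a standard PEFT adapter for the gpt2-large base named in the metadata — would be:

```python
# Minimal sketch: attach the PEFT adapter to its gpt2-large base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2-large")
model = PeftModel.from_pretrained(base, "SudiptoPramanik/TRL")
tokenizer = AutoTokenizer.from_pretrained("gpt2-large")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```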
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
mradermacher/Qwen2.5-Coder-7B-ccs-GGUF | mradermacher | "2024-11-18T03:54:18Z" | 34 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:0x404/Qwen2.5-Coder-7B-ccs",
"base_model:quantized:0x404/Qwen2.5-Coder-7B-ccs",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-18T01:46:41Z" | ---
base_model: 0x404/Qwen2.5-Coder-7B-ccs
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/0x404/Qwen2.5-Coder-7B-ccs
<!-- provided-files -->
Weighted/imatrix quants are currently not available from me. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
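As a minimal local-inference sketch (assuming `llama-cpp-python` is installed and one of the quant files from the table below, e.g. Q4_K_M, has been downloaded into the working directory):

```python
# Minimal sketch: run a downloaded GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Qwen2.5-Coder-7B-ccs.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```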
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-ccs-GGUF/resolve/main/Qwen2.5-Coder-7B-ccs.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
asun17904/multiberts-seed_0_stereoset_classifieronly | asun17904 | "2023-03-24T16:43:08Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:stereoset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-24T03:47:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- stereoset
metrics:
- accuracy
model-index:
- name: multiberts-seed_0_stereoset_classifieronly
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: stereoset
type: stereoset
config: intersentence
split: validation
args: intersentence
metrics:
- name: Accuracy
type: accuracy
value: 0.5855572998430141
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiberts-seed_0_stereoset_classifieronly
This model is a fine-tuned version of [google/multiberts-seed_0](https://huggingface.co/google/multiberts-seed_0) on the stereoset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6824
- Accuracy: 0.5856
- Tp: 0.3414
- Tn: 0.2441
- Fp: 0.2316
- Fn: 0.1829
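As a minimal inference sketch (the sentence pair below is a placeholder, and the exact input formatting expected by the classifier is an assumption):

```python
# Minimal sketch: score an intersentence example with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="asun17904/multiberts-seed_0_stereoset_classifieronly")
print(clf("He is from Ethiopia. He works hard every day."))  # placeholder example
```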
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|
| 0.7177 | 0.43 | 20 | 0.7071 | 0.4584 | 0.1028 | 0.3556 | 0.1201 | 0.4215 |
| 0.7096 | 0.85 | 40 | 0.6953 | 0.4867 | 0.3234 | 0.1633 | 0.3124 | 0.2009 |
| 0.7126 | 1.28 | 60 | 0.6988 | 0.4702 | 0.0573 | 0.4129 | 0.0628 | 0.4670 |
| 0.7012 | 1.7 | 80 | 0.6919 | 0.5141 | 0.2794 | 0.2347 | 0.2410 | 0.2449 |
| 0.7016 | 2.13 | 100 | 0.6881 | 0.5471 | 0.4466 | 0.1005 | 0.3752 | 0.0777 |
| 0.7027 | 2.55 | 120 | 0.6913 | 0.5204 | 0.2520 | 0.2684 | 0.2072 | 0.2724 |
| 0.6918 | 2.98 | 140 | 0.6894 | 0.5526 | 0.3564 | 0.1962 | 0.2794 | 0.1680 |
| 0.697 | 3.4 | 160 | 0.6886 | 0.5628 | 0.3807 | 0.1821 | 0.2936 | 0.1436 |
| 0.696 | 3.83 | 180 | 0.6876 | 0.5526 | 0.4066 | 0.1460 | 0.3297 | 0.1177 |
| 0.7047 | 4.26 | 200 | 0.6936 | 0.4984 | 0.1099 | 0.3885 | 0.0871 | 0.4144 |
| 0.6945 | 4.68 | 220 | 0.6884 | 0.5628 | 0.3713 | 0.1915 | 0.2841 | 0.1531 |
| 0.7051 | 5.11 | 240 | 0.6893 | 0.5518 | 0.3140 | 0.2378 | 0.2378 | 0.2104 |
| 0.6889 | 5.53 | 260 | 0.6869 | 0.5581 | 0.3901 | 0.1680 | 0.3077 | 0.1342 |
| 0.7033 | 5.96 | 280 | 0.6872 | 0.5612 | 0.3799 | 0.1813 | 0.2943 | 0.1444 |
| 0.7039 | 6.38 | 300 | 0.6904 | 0.5330 | 0.2096 | 0.3234 | 0.1523 | 0.3148 |
| 0.6945 | 6.81 | 320 | 0.6861 | 0.5573 | 0.4105 | 0.1468 | 0.3289 | 0.1138 |
| 0.6969 | 7.23 | 340 | 0.6899 | 0.5526 | 0.2575 | 0.2951 | 0.1805 | 0.2669 |
| 0.6951 | 7.66 | 360 | 0.6859 | 0.5573 | 0.4105 | 0.1468 | 0.3289 | 0.1138 |
| 0.6901 | 8.09 | 380 | 0.6903 | 0.5377 | 0.2057 | 0.3320 | 0.1436 | 0.3187 |
| 0.6839 | 8.51 | 400 | 0.6865 | 0.5644 | 0.3870 | 0.1774 | 0.2983 | 0.1374 |
| 0.6965 | 8.94 | 420 | 0.6875 | 0.5683 | 0.3391 | 0.2292 | 0.2465 | 0.1852 |
| 0.6887 | 9.36 | 440 | 0.6869 | 0.5667 | 0.3257 | 0.2410 | 0.2347 | 0.1986 |
| 0.6945 | 9.79 | 460 | 0.6852 | 0.5581 | 0.3846 | 0.1735 | 0.3022 | 0.1397 |
| 0.6864 | 10.21 | 480 | 0.6861 | 0.5659 | 0.3509 | 0.2151 | 0.2606 | 0.1735 |
| 0.6935 | 10.64 | 500 | 0.6876 | 0.5628 | 0.2794 | 0.2834 | 0.1923 | 0.2449 |
| 0.6981 | 11.06 | 520 | 0.6865 | 0.5699 | 0.3250 | 0.2449 | 0.2308 | 0.1994 |
| 0.7011 | 11.49 | 540 | 0.6874 | 0.5628 | 0.2755 | 0.2873 | 0.1884 | 0.2488 |
| 0.6833 | 11.91 | 560 | 0.6842 | 0.5573 | 0.4035 | 0.1538 | 0.3218 | 0.1209 |
| 0.692 | 12.34 | 580 | 0.6913 | 0.5220 | 0.1350 | 0.3870 | 0.0887 | 0.3893 |
| 0.6902 | 12.77 | 600 | 0.6855 | 0.5683 | 0.3713 | 0.1970 | 0.2786 | 0.1531 |
| 0.6905 | 13.19 | 620 | 0.6853 | 0.5699 | 0.3736 | 0.1962 | 0.2794 | 0.1507 |
| 0.6866 | 13.62 | 640 | 0.6872 | 0.5683 | 0.2841 | 0.2841 | 0.1915 | 0.2402 |
| 0.7 | 14.04 | 660 | 0.6853 | 0.5714 | 0.3587 | 0.2127 | 0.2630 | 0.1656 |
| 0.6927 | 14.47 | 680 | 0.6869 | 0.5683 | 0.2684 | 0.2998 | 0.1758 | 0.2559 |
| 0.6891 | 14.89 | 700 | 0.6854 | 0.5683 | 0.3344 | 0.2339 | 0.2418 | 0.1900 |
| 0.684 | 15.32 | 720 | 0.6867 | 0.5691 | 0.2708 | 0.2983 | 0.1774 | 0.2535 |
| 0.6969 | 15.74 | 740 | 0.6842 | 0.5691 | 0.3854 | 0.1837 | 0.2920 | 0.1389 |
| 0.6782 | 16.17 | 760 | 0.6841 | 0.5620 | 0.3972 | 0.1648 | 0.3108 | 0.1272 |
| 0.7023 | 16.6 | 780 | 0.6868 | 0.5777 | 0.3046 | 0.2732 | 0.2025 | 0.2198 |
| 0.6979 | 17.02 | 800 | 0.6841 | 0.5722 | 0.3823 | 0.1900 | 0.2857 | 0.1421 |
| 0.6875 | 17.45 | 820 | 0.6840 | 0.5691 | 0.3846 | 0.1845 | 0.2912 | 0.1397 |
| 0.6852 | 17.87 | 840 | 0.6867 | 0.5675 | 0.2598 | 0.3077 | 0.1680 | 0.2645 |
| 0.688 | 18.3 | 860 | 0.6850 | 0.5691 | 0.3195 | 0.2496 | 0.2261 | 0.2049 |
| 0.6941 | 18.72 | 880 | 0.6858 | 0.5754 | 0.2834 | 0.2920 | 0.1837 | 0.2410 |
| 0.6942 | 19.15 | 900 | 0.6828 | 0.5667 | 0.4215 | 0.1452 | 0.3305 | 0.1028 |
| 0.6883 | 19.57 | 920 | 0.6842 | 0.5699 | 0.3438 | 0.2261 | 0.2496 | 0.1805 |
| 0.6942 | 20.0 | 940 | 0.6858 | 0.5722 | 0.2677 | 0.3046 | 0.1711 | 0.2567 |
| 0.6908 | 20.43 | 960 | 0.6827 | 0.5699 | 0.4027 | 0.1672 | 0.3085 | 0.1217 |
| 0.6857 | 20.85 | 980 | 0.6849 | 0.5777 | 0.3014 | 0.2763 | 0.1994 | 0.2229 |
| 0.7046 | 21.28 | 1000 | 0.6836 | 0.5761 | 0.3587 | 0.2174 | 0.2582 | 0.1656 |
| 0.6856 | 21.7 | 1020 | 0.6832 | 0.5691 | 0.3807 | 0.1884 | 0.2873 | 0.1436 |
| 0.6969 | 22.13 | 1040 | 0.6878 | 0.5447 | 0.1978 | 0.3469 | 0.1287 | 0.3265 |
| 0.6957 | 22.55 | 1060 | 0.6854 | 0.5769 | 0.2991 | 0.2779 | 0.1978 | 0.2253 |
| 0.6903 | 22.98 | 1080 | 0.6842 | 0.5761 | 0.3375 | 0.2386 | 0.2370 | 0.1868 |
| 0.6923 | 23.4 | 1100 | 0.6869 | 0.5683 | 0.2347 | 0.3336 | 0.1421 | 0.2896 |
| 0.7005 | 23.83 | 1120 | 0.6852 | 0.5801 | 0.3061 | 0.2739 | 0.2017 | 0.2182 |
| 0.6918 | 24.26 | 1140 | 0.6828 | 0.5722 | 0.3807 | 0.1915 | 0.2841 | 0.1436 |
| 0.701 | 24.68 | 1160 | 0.6839 | 0.5801 | 0.3367 | 0.2433 | 0.2323 | 0.1876 |
| 0.6947 | 25.11 | 1180 | 0.6831 | 0.5722 | 0.3689 | 0.2033 | 0.2724 | 0.1554 |
| 0.6941 | 25.53 | 1200 | 0.6833 | 0.5754 | 0.3571 | 0.2182 | 0.2575 | 0.1672 |
| 0.6877 | 25.96 | 1220 | 0.6836 | 0.5808 | 0.3446 | 0.2363 | 0.2394 | 0.1797 |
| 0.6891 | 26.38 | 1240 | 0.6829 | 0.5706 | 0.3673 | 0.2033 | 0.2724 | 0.1570 |
| 0.6954 | 26.81 | 1260 | 0.6834 | 0.5769 | 0.3509 | 0.2261 | 0.2496 | 0.1735 |
| 0.6854 | 27.23 | 1280 | 0.6845 | 0.5769 | 0.3140 | 0.2630 | 0.2127 | 0.2104 |
| 0.6829 | 27.66 | 1300 | 0.6866 | 0.5581 | 0.2166 | 0.3414 | 0.1342 | 0.3077 |
| 0.6936 | 28.09 | 1320 | 0.6826 | 0.5746 | 0.3768 | 0.1978 | 0.2779 | 0.1476 |
| 0.6808 | 28.51 | 1340 | 0.6831 | 0.5777 | 0.3548 | 0.2229 | 0.2527 | 0.1695 |
| 0.6909 | 28.94 | 1360 | 0.6836 | 0.5832 | 0.3375 | 0.2457 | 0.2300 | 0.1868 |
| 0.6863 | 29.36 | 1380 | 0.6835 | 0.5793 | 0.3430 | 0.2363 | 0.2394 | 0.1813 |
| 0.6897 | 29.79 | 1400 | 0.6825 | 0.5746 | 0.3783 | 0.1962 | 0.2794 | 0.1460 |
| 0.6889 | 30.21 | 1420 | 0.6838 | 0.5785 | 0.3273 | 0.2512 | 0.2245 | 0.1970 |
| 0.6917 | 30.64 | 1440 | 0.6828 | 0.5746 | 0.3619 | 0.2127 | 0.2630 | 0.1625 |
| 0.6953 | 31.06 | 1460 | 0.6849 | 0.5769 | 0.2786 | 0.2983 | 0.1774 | 0.2457 |
| 0.6819 | 31.49 | 1480 | 0.6868 | 0.5526 | 0.1868 | 0.3658 | 0.1099 | 0.3375 |
| 0.6915 | 31.91 | 1500 | 0.6830 | 0.5808 | 0.3383 | 0.2425 | 0.2331 | 0.1860 |
| 0.6968 | 32.34 | 1520 | 0.6815 | 0.5793 | 0.3987 | 0.1805 | 0.2951 | 0.1256 |
| 0.6816 | 32.77 | 1540 | 0.6824 | 0.5808 | 0.3587 | 0.2221 | 0.2535 | 0.1656 |
| 0.695 | 33.19 | 1560 | 0.6839 | 0.5793 | 0.2991 | 0.2802 | 0.1954 | 0.2253 |
| 0.6784 | 33.62 | 1580 | 0.6847 | 0.5801 | 0.2684 | 0.3116 | 0.1641 | 0.2559 |
| 0.688 | 34.04 | 1600 | 0.6825 | 0.5793 | 0.3548 | 0.2245 | 0.2512 | 0.1695 |
| 0.6872 | 34.47 | 1620 | 0.6835 | 0.5808 | 0.3132 | 0.2677 | 0.2080 | 0.2111 |
| 0.6975 | 34.89 | 1640 | 0.6828 | 0.5808 | 0.3469 | 0.2339 | 0.2418 | 0.1774 |
| 0.6889 | 35.32 | 1660 | 0.6837 | 0.5824 | 0.3124 | 0.2700 | 0.2057 | 0.2119 |
| 0.6873 | 35.74 | 1680 | 0.6825 | 0.5785 | 0.3611 | 0.2174 | 0.2582 | 0.1633 |
| 0.6938 | 36.17 | 1700 | 0.6825 | 0.5777 | 0.3611 | 0.2166 | 0.2590 | 0.1633 |
| 0.7051 | 36.6 | 1720 | 0.6829 | 0.5816 | 0.3422 | 0.2394 | 0.2363 | 0.1821 |
| 0.6894 | 37.02 | 1740 | 0.6822 | 0.5824 | 0.3626 | 0.2198 | 0.2559 | 0.1617 |
| 0.6987 | 37.45 | 1760 | 0.6828 | 0.5856 | 0.3414 | 0.2441 | 0.2316 | 0.1829 |
| 0.6916 | 37.87 | 1780 | 0.6835 | 0.5777 | 0.3061 | 0.2716 | 0.2041 | 0.2182 |
| 0.6835 | 38.3 | 1800 | 0.6830 | 0.5816 | 0.3234 | 0.2582 | 0.2174 | 0.2009 |
| 0.6866 | 38.72 | 1820 | 0.6832 | 0.5863 | 0.3203 | 0.2661 | 0.2096 | 0.2041 |
| 0.6856 | 39.15 | 1840 | 0.6829 | 0.5848 | 0.3320 | 0.2527 | 0.2229 | 0.1923 |
| 0.6884 | 39.57 | 1860 | 0.6821 | 0.5816 | 0.3595 | 0.2221 | 0.2535 | 0.1648 |
| 0.6833 | 40.0 | 1880 | 0.6828 | 0.5863 | 0.3352 | 0.2512 | 0.2245 | 0.1892 |
| 0.6805 | 40.43 | 1900 | 0.6826 | 0.5840 | 0.3407 | 0.2433 | 0.2323 | 0.1837 |
| 0.6941 | 40.85 | 1920 | 0.6817 | 0.5754 | 0.3681 | 0.2072 | 0.2684 | 0.1562 |
| 0.6902 | 41.28 | 1940 | 0.6821 | 0.5816 | 0.3532 | 0.2284 | 0.2473 | 0.1711 |
| 0.692 | 41.7 | 1960 | 0.6826 | 0.5863 | 0.3383 | 0.2480 | 0.2276 | 0.1860 |
| 0.6927 | 42.13 | 1980 | 0.6824 | 0.5848 | 0.3454 | 0.2394 | 0.2363 | 0.1790 |
| 0.6849 | 42.55 | 2000 | 0.6822 | 0.5793 | 0.3501 | 0.2292 | 0.2465 | 0.1743 |
| 0.6836 | 42.98 | 2020 | 0.6821 | 0.5801 | 0.3540 | 0.2261 | 0.2496 | 0.1703 |
| 0.6916 | 43.4 | 2040 | 0.6822 | 0.5824 | 0.3477 | 0.2347 | 0.2410 | 0.1766 |
| 0.6825 | 43.83 | 2060 | 0.6824 | 0.5832 | 0.3446 | 0.2386 | 0.2370 | 0.1797 |
| 0.6939 | 44.26 | 2080 | 0.6825 | 0.5863 | 0.3383 | 0.2480 | 0.2276 | 0.1860 |
| 0.6899 | 44.68 | 2100 | 0.6820 | 0.5801 | 0.3509 | 0.2292 | 0.2465 | 0.1735 |
| 0.6873 | 45.11 | 2120 | 0.6819 | 0.5801 | 0.3587 | 0.2214 | 0.2543 | 0.1656 |
| 0.696 | 45.53 | 2140 | 0.6820 | 0.5801 | 0.3564 | 0.2237 | 0.2520 | 0.1680 |
| 0.697 | 45.96 | 2160 | 0.6824 | 0.5856 | 0.3485 | 0.2370 | 0.2386 | 0.1758 |
| 0.6891 | 46.38 | 2180 | 0.6825 | 0.5848 | 0.3430 | 0.2418 | 0.2339 | 0.1813 |
| 0.6828 | 46.81 | 2200 | 0.6822 | 0.5816 | 0.3501 | 0.2316 | 0.2441 | 0.1743 |
| 0.6904 | 47.23 | 2220 | 0.6823 | 0.5848 | 0.3477 | 0.2370 | 0.2386 | 0.1766 |
| 0.6891 | 47.66 | 2240 | 0.6825 | 0.5863 | 0.3391 | 0.2473 | 0.2284 | 0.1852 |
| 0.6867 | 48.09 | 2260 | 0.6826 | 0.5871 | 0.3383 | 0.2488 | 0.2268 | 0.1860 |
| 0.688 | 48.51 | 2280 | 0.6824 | 0.5832 | 0.3430 | 0.2402 | 0.2355 | 0.1813 |
| 0.6938 | 48.94 | 2300 | 0.6824 | 0.5856 | 0.3399 | 0.2457 | 0.2300 | 0.1845 |
| 0.6823 | 49.36 | 2320 | 0.6824 | 0.5863 | 0.3407 | 0.2457 | 0.2300 | 0.1837 |
| 0.6886 | 49.79 | 2340 | 0.6824 | 0.5856 | 0.3414 | 0.2441 | 0.2316 | 0.1829 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
DVC12/NoPunIntended_DeBERTa | DVC12 | "2024-04-22T20:11:27Z" | 167 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-14T01:32:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
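In the absence of an official snippet, a minimal sketch — assuming a standard text-classification head, per the repository tags; the example input and the label semantics are placeholders:

```python
# Minimal sketch: run the DeBERTa classifier on a candidate sentence.
from transformers import pipeline

clf = pipeline("text-classification", model="DVC12/NoPunIntended_DeBERTa")
print(clf("I used to be a banker, but I lost interest."))  # placeholder input
```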
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3 | yjwon | "2024-11-06T01:40:18Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-06T01:36:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
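In the absence of an official snippet, a minimal chat-style sketch — assuming the tokenizer ships a chat template, as is typical for Mistral-based checkpoints:

```python
# Minimal sketch: chat-style generation with the Mistral-based checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what DPO training does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```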
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso05/4d0c3528-ce7a-44e2-8fdd-1cae98fa9587 | lesso05 | "2025-01-13T14:16:23Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-13T12:58:34Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4d0c3528-ce7a-44e2-8fdd-1cae98fa9587
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: true
chat_template: llama3
datasets:
- data_files:
- debb7971d10226d0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/debb7971d10226d0_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/4d0c3528-ce7a-44e2-8fdd-1cae98fa9587
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/debb7971d10226d0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ca944e92-ac96-4f24-a7a9-1bbc577c96c5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ca944e92-ac96-4f24-a7a9-1bbc577c96c5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4d0c3528-ce7a-44e2-8fdd-1cae98fa9587
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (OptimizerNames.ADAMW_BNB, 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8494 | 0.0002 | 1 | 2.5698 |
| 2.2498 | 0.0009 | 5 | 2.3741 |
| 1.9533 | 0.0018 | 10 | 1.7097 |
| 1.4252 | 0.0027 | 15 | 1.5894 |
| 1.0872 | 0.0036 | 20 | 1.5562 |
| 1.7301 | 0.0045 | 25 | 1.5385 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |