modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-23 12:29:03) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 492 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-23 12:24:08) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
YYYYYYibo/nash_simple_online_iter_2 | YYYYYYibo | 2024-05-20T20:28:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:adapter:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T18:41:47Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: alignment-handbook/zephyr-7b-sft-full
datasets:
- updated
- original
model-index:
- name: nash_simple_online_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nash_simple_online_iter_2
This model is a fine-tuned version of [YYYYYYibo/nash_simple_online_iter_1](https://huggingface.co/YYYYYYibo/nash_simple_online_iter_1) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6782
- Rewards/chosen: -0.0686
- Rewards/rejected: -0.0971
- Rewards/accuracies: 0.6100
- Rewards/margins: 0.0284
- Logps/rejected: -268.4390
- Logps/chosen: -288.5429
- Logits/rejected: -2.5385
- Logits/chosen: -2.6271
## Model description
More information needed
## Intended uses & limitations
More information needed
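No usage guidance is given here. As a minimal, non-authoritative sketch, assuming this repository hosts a PEFT LoRA adapter for the base model listed in the card metadata:
```python
# Hedged sketch: repo and base-model ids are taken from this card's metadata;
# requires the transformers and peft packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "alignment-handbook/zephyr-7b-sft-full", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "YYYYYYibo/nash_simple_online_iter_2")  # attach the DPO-trained adapter
tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")
```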
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6886 | 0.64 | 100 | 0.6782 | -0.0686 | -0.0971 | 0.6100 | 0.0284 | -268.4390 | -288.5429 | -2.5385 | -2.6271 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
BilalMuftuoglu/beit-base-patch16-224-85-fold2 | BilalMuftuoglu | 2024-05-20T20:28:16Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T20:07:39Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-85-fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9318181818181818
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-85-fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- Accuracy: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
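No usage example is provided; below is a minimal inference sketch, assuming the standard transformers image-classification pipeline (the image path is a placeholder, not from the card):
```python
from transformers import pipeline

# Model id is taken from this card; "example.jpg" stands in for your own image file.
classifier = pipeline("image-classification", model="BilalMuftuoglu/beit-base-patch16-224-85-fold2")
print(classifier("example.jpg"))
```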
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6057 | 0.7273 |
| No log | 2.0 | 4 | 0.6639 | 0.7045 |
| No log | 3.0 | 6 | 0.7324 | 0.7045 |
| No log | 4.0 | 8 | 0.5213 | 0.7273 |
| 0.5701 | 5.0 | 10 | 0.4717 | 0.8182 |
| 0.5701 | 6.0 | 12 | 0.5339 | 0.7045 |
| 0.5701 | 7.0 | 14 | 0.4959 | 0.7273 |
| 0.5701 | 8.0 | 16 | 0.4086 | 0.8409 |
| 0.5701 | 9.0 | 18 | 0.4039 | 0.8182 |
| 0.4248 | 10.0 | 20 | 0.4106 | 0.8182 |
| 0.4248 | 11.0 | 22 | 0.4108 | 0.8409 |
| 0.4248 | 12.0 | 24 | 0.4607 | 0.7727 |
| 0.4248 | 13.0 | 26 | 0.4446 | 0.7727 |
| 0.4248 | 14.0 | 28 | 0.3912 | 0.8409 |
| 0.3579 | 15.0 | 30 | 0.5183 | 0.7727 |
| 0.3579 | 16.0 | 32 | 0.2991 | 0.8864 |
| 0.3579 | 17.0 | 34 | 0.3587 | 0.8182 |
| 0.3579 | 18.0 | 36 | 0.3110 | 0.8182 |
| 0.3579 | 19.0 | 38 | 0.3084 | 0.8636 |
| 0.2838 | 20.0 | 40 | 0.3079 | 0.8864 |
| 0.2838 | 21.0 | 42 | 0.3033 | 0.8409 |
| 0.2838 | 22.0 | 44 | 0.3126 | 0.8409 |
| 0.2838 | 23.0 | 46 | 0.3171 | 0.8864 |
| 0.2838 | 24.0 | 48 | 0.2689 | 0.8636 |
| 0.2705 | 25.0 | 50 | 0.3175 | 0.8409 |
| 0.2705 | 26.0 | 52 | 0.3464 | 0.8409 |
| 0.2705 | 27.0 | 54 | 0.3092 | 0.8636 |
| 0.2705 | 28.0 | 56 | 0.3178 | 0.8636 |
| 0.2705 | 29.0 | 58 | 0.4107 | 0.7955 |
| 0.1887 | 30.0 | 60 | 0.4151 | 0.8182 |
| 0.1887 | 31.0 | 62 | 0.5450 | 0.7955 |
| 0.1887 | 32.0 | 64 | 0.2892 | 0.8409 |
| 0.1887 | 33.0 | 66 | 0.4078 | 0.8409 |
| 0.1887 | 34.0 | 68 | 0.2821 | 0.8636 |
| 0.1692 | 35.0 | 70 | 0.2708 | 0.8636 |
| 0.1692 | 36.0 | 72 | 0.2692 | 0.8864 |
| 0.1692 | 37.0 | 74 | 0.2806 | 0.8864 |
| 0.1692 | 38.0 | 76 | 0.4613 | 0.8182 |
| 0.1692 | 39.0 | 78 | 0.2887 | 0.9091 |
| 0.1623 | 40.0 | 80 | 0.4046 | 0.8409 |
| 0.1623 | 41.0 | 82 | 0.4542 | 0.8409 |
| 0.1623 | 42.0 | 84 | 0.3010 | 0.8636 |
| 0.1623 | 43.0 | 86 | 0.2954 | 0.8636 |
| 0.1623 | 44.0 | 88 | 0.2838 | 0.8864 |
| 0.1522 | 45.0 | 90 | 0.2675 | 0.8864 |
| 0.1522 | 46.0 | 92 | 0.2517 | 0.9091 |
| 0.1522 | 47.0 | 94 | 0.2687 | 0.9091 |
| 0.1522 | 48.0 | 96 | 0.2551 | 0.9091 |
| 0.1522 | 49.0 | 98 | 0.2661 | 0.8864 |
| 0.1379 | 50.0 | 100 | 0.3507 | 0.8182 |
| 0.1379 | 51.0 | 102 | 0.2629 | 0.8864 |
| 0.1379 | 52.0 | 104 | 0.2697 | 0.8864 |
| 0.1379 | 53.0 | 106 | 0.3081 | 0.8636 |
| 0.1379 | 54.0 | 108 | 0.3851 | 0.8409 |
| 0.1283 | 55.0 | 110 | 0.3104 | 0.8636 |
| 0.1283 | 56.0 | 112 | 0.3624 | 0.8864 |
| 0.1283 | 57.0 | 114 | 0.3199 | 0.8864 |
| 0.1283 | 58.0 | 116 | 0.4964 | 0.8182 |
| 0.1283 | 59.0 | 118 | 0.3356 | 0.8864 |
| 0.1335 | 60.0 | 120 | 0.2314 | 0.9091 |
| 0.1335 | 61.0 | 122 | 0.2334 | 0.9091 |
| 0.1335 | 62.0 | 124 | 0.3961 | 0.8636 |
| 0.1335 | 63.0 | 126 | 0.3453 | 0.8636 |
| 0.1335 | 64.0 | 128 | 0.2806 | 0.8636 |
| 0.1353 | 65.0 | 130 | 0.3372 | 0.8636 |
| 0.1353 | 66.0 | 132 | 0.2675 | 0.8864 |
| 0.1353 | 67.0 | 134 | 0.3482 | 0.8864 |
| 0.1353 | 68.0 | 136 | 0.3725 | 0.8636 |
| 0.1353 | 69.0 | 138 | 0.3769 | 0.8636 |
| 0.099 | 70.0 | 140 | 0.5170 | 0.8409 |
| 0.099 | 71.0 | 142 | 0.4710 | 0.8636 |
| 0.099 | 72.0 | 144 | 0.3266 | 0.9091 |
| 0.099 | 73.0 | 146 | 0.3390 | 0.8636 |
| 0.099 | 74.0 | 148 | 0.3051 | 0.8636 |
| 0.1179 | 75.0 | 150 | 0.3030 | 0.9091 |
| 0.1179 | 76.0 | 152 | 0.3208 | 0.9091 |
| 0.1179 | 77.0 | 154 | 0.2954 | 0.9091 |
| 0.1179 | 78.0 | 156 | 0.2777 | 0.9091 |
| 0.1179 | 79.0 | 158 | 0.2763 | 0.9318 |
| 0.1077 | 80.0 | 160 | 0.3059 | 0.9091 |
| 0.1077 | 81.0 | 162 | 0.3445 | 0.8864 |
| 0.1077 | 82.0 | 164 | 0.3239 | 0.9091 |
| 0.1077 | 83.0 | 166 | 0.3175 | 0.9091 |
| 0.1077 | 84.0 | 168 | 0.3214 | 0.9091 |
| 0.0907 | 85.0 | 170 | 0.3313 | 0.9091 |
| 0.0907 | 86.0 | 172 | 0.3492 | 0.9091 |
| 0.0907 | 87.0 | 174 | 0.3644 | 0.9091 |
| 0.0907 | 88.0 | 176 | 0.3637 | 0.9091 |
| 0.0907 | 89.0 | 178 | 0.3750 | 0.9091 |
| 0.0972 | 90.0 | 180 | 0.3845 | 0.9091 |
| 0.0972 | 91.0 | 182 | 0.3749 | 0.9091 |
| 0.0972 | 92.0 | 184 | 0.3721 | 0.8864 |
| 0.0972 | 93.0 | 186 | 0.3680 | 0.8864 |
| 0.0972 | 94.0 | 188 | 0.3634 | 0.8864 |
| 0.0733 | 95.0 | 190 | 0.3565 | 0.9091 |
| 0.0733 | 96.0 | 192 | 0.3519 | 0.9091 |
| 0.0733 | 97.0 | 194 | 0.3529 | 0.9091 |
| 0.0733 | 98.0 | 196 | 0.3536 | 0.9091 |
| 0.0733 | 99.0 | 198 | 0.3561 | 0.9091 |
| 0.079 | 100.0 | 200 | 0.3565 | 0.9091 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LoneStriker/Yi-1.5-34B-32K-5.0bpw-h6-exl2 | LoneStriker | 2024-05-20T20:26:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:17:26Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with the Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
alexx1/llama3-omegle-lora-r128-adapter | alexx1 | 2024-05-20T20:17:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T20:16:27Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** alexx1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
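The card does not show how to load the weights. A hedged sketch, assuming (from the repository name) that this repo stores a PEFT LoRA adapter on the 4-bit base model listed above; it presumes transformers, peft, and bitsandbytes are installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the repo contains adapter weights compatible with PeftModel.from_pretrained.
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "alexx1/llama3-omegle-lora-r128-adapter")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```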
|
LoneStriker/Yi-1.5-34B-32K-4.65bpw-h6-exl2 | LoneStriker | 2024-05-20T20:17:22Z | 13 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:08:51Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with the Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
alexx1/llama3-omegle-lora-r128-16bit | alexx1 | 2024-05-20T20:16:20Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T20:13:23Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** alexx1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
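As with the adapter repo, no usage snippet is given; here is a minimal text-generation sketch, assuming the merged 16-bit weights load with the standard transformers causal-LM API:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alexx1/llama3-omegle-lora-r128-16bit")
model = AutoModelForCausalLM.from_pretrained(
    "alexx1/llama3-omegle-lora-r128-16bit", torch_dtype=torch.float16, device_map="auto"
)
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```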
|
seregadgl101/baii_v12_12ep | seregadgl101 | 2024-05-20T20:15:00Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-20T20:12:33Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# seregadgl101/baii_v12_12ep
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_v12_12ep')
embeddings = model.encode(sentences)
print(embeddings)
```
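For the clustering and semantic-search use cases mentioned above, embeddings are typically compared with cosine similarity. A small illustrative follow-up (the query and documents are made-up examples, not from the card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('seregadgl101/baii_v12_12ep')
query = model.encode("How do I reset my password?", convert_to_tensor=True)
docs = model.encode(["Password reset instructions", "Shipping and delivery times"], convert_to_tensor=True)
print(util.cos_sim(query, docs))  # higher scores indicate closer semantic similarity
```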
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_v12_12ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
thirdai/NamedEntityRecognition | thirdai | 2024-05-20T20:11:17Z | 0 | 0 | null | [
"token-classification",
"region:us"
] | token-classification | 2024-05-20T19:58:14Z | ---
pipeline_tag: token-classification
--- |
cobrakenji/granite-20b-code-base-GGUF | cobrakenji | 2024-05-20T20:10:19Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"gpt_bigcode",
"text-generation",
"code",
"granite",
"dataset:codeparrot/github-code-clean",
"dataset:bigcode/starcoderdata",
"dataset:open-web-math/open-web-math",
"dataset:math-ai/StackMathQA",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T16:24:07Z | ---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
- open-web-math/open-web-math
- math-ai/StackMathQA
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version | Phase 2
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-base
results:
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 43.8
verified: false
- task:
type: text-generation
dataset:
type: evalplus/mbppplus
name: MBPP+
metrics:
- name: pass@1
type: pass@1
value: 51.6
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 48.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 50.0
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 59.1
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 40.9
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 35.4
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 17.1
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 18.3
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 23.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 10.4
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 25.6
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 23.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 23.8
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 14.6
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 26.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 15.2
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 3.0
verified: false
---
### Description:
This is forked from IBM's [`granite-20b-code-base-GGUF`](https://huggingface.co/ibm-granite/granite-20b-code-base-GGUF) - commit [`d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44`](https://huggingface.co/ibm-granite/granite-20b-code-base-GGUF/commit/d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44).
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-20b-code-base) for more details.
## Use with llama.cpp
```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# install
make
# run generation
./main -m granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
|
OwOpeepeepoopoo/DancingElaine5 | OwOpeepeepoopoo | 2024-05-20T20:09:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T12:09:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
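Since the card leaves this section blank, here is a minimal sketch, assuming this StableLM-architecture checkpoint loads with the standard transformers causal-LM API (an untested assumption, not the author's instructions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwOpeepeepoopoo/DancingElaine5")
model = AutoModelForCausalLM.from_pretrained("OwOpeepeepoopoo/DancingElaine5", device_map="auto")
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```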
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/MONT_D_ORbis | matthieuzone | 2024-05-20T20:08:29Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:00:18Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MONT_D_ORbis
<Gallery />
## Model description
These are matthieuzone/MONT_D_ORbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/MONT_D_ORbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
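Pending the author's own snippet above, a minimal sketch, assuming the standard diffusers SDXL + LoRA loading APIs and using only the components named in this card (base model, fp16-fix VAE, trigger phrase):
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MONT_D_ORbis")  # LoRA weights from this repo
image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```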
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
BilalMuftuoglu/beit-base-patch16-224-85-fold1 | BilalMuftuoglu | 2024-05-20T20:07:31Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T19:46:35Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-85-fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9772727272727273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-85-fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1430
- Accuracy: 0.9773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.7308 | 0.5455 |
| No log | 2.0 | 4 | 0.7927 | 0.7045 |
| No log | 3.0 | 6 | 0.9672 | 0.7045 |
| No log | 4.0 | 8 | 0.6257 | 0.7045 |
| 0.6404 | 5.0 | 10 | 0.4646 | 0.7955 |
| 0.6404 | 6.0 | 12 | 0.5648 | 0.7045 |
| 0.6404 | 7.0 | 14 | 0.4389 | 0.7727 |
| 0.6404 | 8.0 | 16 | 0.4523 | 0.75 |
| 0.6404 | 9.0 | 18 | 0.4698 | 0.75 |
| 0.455 | 10.0 | 20 | 0.3707 | 0.8409 |
| 0.455 | 11.0 | 22 | 0.3594 | 0.8182 |
| 0.455 | 12.0 | 24 | 0.6136 | 0.7273 |
| 0.455 | 13.0 | 26 | 0.3022 | 0.8864 |
| 0.455 | 14.0 | 28 | 0.2919 | 0.8409 |
| 0.3981 | 15.0 | 30 | 0.3612 | 0.8182 |
| 0.3981 | 16.0 | 32 | 0.2492 | 0.8864 |
| 0.3981 | 17.0 | 34 | 0.2460 | 0.9091 |
| 0.3981 | 18.0 | 36 | 0.2931 | 0.8636 |
| 0.3981 | 19.0 | 38 | 0.1822 | 0.9091 |
| 0.3257 | 20.0 | 40 | 0.2060 | 0.9091 |
| 0.3257 | 21.0 | 42 | 0.2195 | 0.8864 |
| 0.3257 | 22.0 | 44 | 0.2624 | 0.9091 |
| 0.3257 | 23.0 | 46 | 0.2384 | 0.9091 |
| 0.3257 | 24.0 | 48 | 0.1767 | 0.9318 |
| 0.2553 | 25.0 | 50 | 0.2040 | 0.9318 |
| 0.2553 | 26.0 | 52 | 0.1981 | 0.9091 |
| 0.2553 | 27.0 | 54 | 0.1835 | 0.9318 |
| 0.2553 | 28.0 | 56 | 0.1820 | 0.9318 |
| 0.2553 | 29.0 | 58 | 0.1466 | 0.9545 |
| 0.2083 | 30.0 | 60 | 0.1668 | 0.9318 |
| 0.2083 | 31.0 | 62 | 0.2229 | 0.9318 |
| 0.2083 | 32.0 | 64 | 0.1783 | 0.9545 |
| 0.2083 | 33.0 | 66 | 0.1944 | 0.8864 |
| 0.2083 | 34.0 | 68 | 0.3025 | 0.9091 |
| 0.2353 | 35.0 | 70 | 0.4457 | 0.8409 |
| 0.2353 | 36.0 | 72 | 0.2759 | 0.9318 |
| 0.2353 | 37.0 | 74 | 0.2179 | 0.9318 |
| 0.2353 | 38.0 | 76 | 0.3911 | 0.9091 |
| 0.2353 | 39.0 | 78 | 0.5785 | 0.8409 |
| 0.1782 | 40.0 | 80 | 0.2339 | 0.9318 |
| 0.1782 | 41.0 | 82 | 0.2302 | 0.9091 |
| 0.1782 | 42.0 | 84 | 0.3967 | 0.8864 |
| 0.1782 | 43.0 | 86 | 0.4447 | 0.8636 |
| 0.1782 | 44.0 | 88 | 0.2020 | 0.9091 |
| 0.2059 | 45.0 | 90 | 0.1911 | 0.9318 |
| 0.2059 | 46.0 | 92 | 0.2609 | 0.9091 |
| 0.2059 | 47.0 | 94 | 0.2925 | 0.9091 |
| 0.2059 | 48.0 | 96 | 0.2079 | 0.9318 |
| 0.2059 | 49.0 | 98 | 0.1853 | 0.9318 |
| 0.1706 | 50.0 | 100 | 0.2860 | 0.9318 |
| 0.1706 | 51.0 | 102 | 0.3735 | 0.8636 |
| 0.1706 | 52.0 | 104 | 0.1968 | 0.9318 |
| 0.1706 | 53.0 | 106 | 0.1722 | 0.9318 |
| 0.1706 | 54.0 | 108 | 0.3123 | 0.8636 |
| 0.1429 | 55.0 | 110 | 0.3297 | 0.8864 |
| 0.1429 | 56.0 | 112 | 0.1430 | 0.9773 |
| 0.1429 | 57.0 | 114 | 0.1134 | 0.9773 |
| 0.1429 | 58.0 | 116 | 0.2312 | 0.9091 |
| 0.1429 | 59.0 | 118 | 0.2826 | 0.9091 |
| 0.1325 | 60.0 | 120 | 0.2417 | 0.9091 |
| 0.1325 | 61.0 | 122 | 0.1393 | 0.9318 |
| 0.1325 | 62.0 | 124 | 0.2178 | 0.9318 |
| 0.1325 | 63.0 | 126 | 0.3991 | 0.9091 |
| 0.1325 | 64.0 | 128 | 0.3325 | 0.9091 |
| 0.1481 | 65.0 | 130 | 0.2327 | 0.9091 |
| 0.1481 | 66.0 | 132 | 0.2885 | 0.9091 |
| 0.1481 | 67.0 | 134 | 0.3576 | 0.9091 |
| 0.1481 | 68.0 | 136 | 0.2686 | 0.9318 |
| 0.1481 | 69.0 | 138 | 0.1717 | 0.9545 |
| 0.1237 | 70.0 | 140 | 0.1493 | 0.9545 |
| 0.1237 | 71.0 | 142 | 0.1429 | 0.9318 |
| 0.1237 | 72.0 | 144 | 0.1790 | 0.9318 |
| 0.1237 | 73.0 | 146 | 0.1590 | 0.9318 |
| 0.1237 | 74.0 | 148 | 0.1971 | 0.8864 |
| 0.105 | 75.0 | 150 | 0.2229 | 0.9318 |
| 0.105 | 76.0 | 152 | 0.1789 | 0.8864 |
| 0.105 | 77.0 | 154 | 0.1671 | 0.9545 |
| 0.105 | 78.0 | 156 | 0.2435 | 0.9318 |
| 0.105 | 79.0 | 158 | 0.2658 | 0.9318 |
| 0.0923 | 80.0 | 160 | 0.2092 | 0.9318 |
| 0.0923 | 81.0 | 162 | 0.1748 | 0.9318 |
| 0.0923 | 82.0 | 164 | 0.1727 | 0.9318 |
| 0.0923 | 83.0 | 166 | 0.1945 | 0.9091 |
| 0.0923 | 84.0 | 168 | 0.2429 | 0.9318 |
| 0.1033 | 85.0 | 170 | 0.2796 | 0.9318 |
| 0.1033 | 86.0 | 172 | 0.2548 | 0.9318 |
| 0.1033 | 87.0 | 174 | 0.2379 | 0.9091 |
| 0.1033 | 88.0 | 176 | 0.2409 | 0.9091 |
| 0.1033 | 89.0 | 178 | 0.2421 | 0.9091 |
| 0.1073 | 90.0 | 180 | 0.2332 | 0.9091 |
| 0.1073 | 91.0 | 182 | 0.2231 | 0.9091 |
| 0.1073 | 92.0 | 184 | 0.2153 | 0.9318 |
| 0.1073 | 93.0 | 186 | 0.2088 | 0.9318 |
| 0.1073 | 94.0 | 188 | 0.2058 | 0.9318 |
| 0.104 | 95.0 | 190 | 0.2040 | 0.9318 |
| 0.104 | 96.0 | 192 | 0.2046 | 0.9318 |
| 0.104 | 97.0 | 194 | 0.2043 | 0.9318 |
| 0.104 | 98.0 | 196 | 0.2056 | 0.9318 |
| 0.104 | 99.0 | 198 | 0.2081 | 0.9318 |
| 0.0896 | 100.0 | 200 | 0.2097 | 0.9318 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
asude55/youtube-da22 | asude55 | 2024-05-20T20:02:32Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T16:39:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LoneStriker/Yi-1.5-34B-32K-3.0bpw-h6-exl2 | LoneStriker | 2024-05-20T20:01:18Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T19:55:35Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with the Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
matthieuzone/MIMOLETTEbis | matthieuzone | 2024-05-20T20:00:03Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:51:54Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MIMOLETTEbis
<Gallery />
## Model description
These are matthieuzone/MIMOLETTEbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/MIMOLETTEbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
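Until the TODO above is filled in, a hedged sketch along the same lines as the other LoRA cards, assuming the standard diffusers SDXL + LoRA APIs; base model, VAE, and trigger phrase are those stated in this card:
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MIMOLETTEbis")  # LoRA weights from this repo
image = pipe("a photo of sks cheese").images[0]
image.save("mimolette.png")
```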
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
matthieuzone/MAROILLESbis | matthieuzone | 2024-05-20T19:51:40Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:43:29Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MAROILLESbis
<Gallery />
## Model description
These are matthieuzone/MAROILLESbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/MAROILLESbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
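As with the other cheese LoRAs, a hedged usage sketch (the card itself provides no snippet; the standard diffusers SDXL + LoRA APIs are assumed, with the base model, VAE, and trigger phrase named above):
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MAROILLESbis")  # LoRA weights from this repo
image = pipe("a photo of sks cheese").images[0]
image.save("maroilles.png")
```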
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
BilalMuftuoglu/beit-base-patch16-224-75-fold5 | BilalMuftuoglu | 2024-05-20T19:35:52Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T19:10:41Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-75-fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9534883720930233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-75-fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2664
- Accuracy: 0.9535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6862 | 0.5116 |
| No log | 2.0 | 4 | 0.5913 | 0.7209 |
| No log | 3.0 | 6 | 0.7204 | 0.6977 |
| No log | 4.0 | 8 | 0.5995 | 0.6977 |
| 0.6162 | 5.0 | 10 | 0.4235 | 0.8140 |
| 0.6162 | 6.0 | 12 | 0.3975 | 0.8140 |
| 0.6162 | 7.0 | 14 | 0.6029 | 0.7674 |
| 0.6162 | 8.0 | 16 | 0.4670 | 0.8140 |
| 0.6162 | 9.0 | 18 | 0.3448 | 0.8372 |
| 0.4312 | 10.0 | 20 | 0.4464 | 0.8372 |
| 0.4312 | 11.0 | 22 | 0.3396 | 0.8605 |
| 0.4312 | 12.0 | 24 | 0.4007 | 0.8372 |
| 0.4312 | 13.0 | 26 | 0.3398 | 0.8140 |
| 0.4312 | 14.0 | 28 | 0.4276 | 0.8605 |
| 0.3453 | 15.0 | 30 | 0.4336 | 0.8605 |
| 0.3453 | 16.0 | 32 | 0.3777 | 0.8140 |
| 0.3453 | 17.0 | 34 | 0.5910 | 0.8140 |
| 0.3453 | 18.0 | 36 | 0.6095 | 0.8140 |
| 0.3453 | 19.0 | 38 | 0.3570 | 0.8140 |
| 0.3288 | 20.0 | 40 | 0.5202 | 0.8140 |
| 0.3288 | 21.0 | 42 | 0.5604 | 0.8140 |
| 0.3288 | 22.0 | 44 | 0.2949 | 0.8372 |
| 0.3288 | 23.0 | 46 | 0.3442 | 0.8837 |
| 0.3288 | 24.0 | 48 | 0.2820 | 0.8372 |
| 0.2571 | 25.0 | 50 | 0.3240 | 0.8605 |
| 0.2571 | 26.0 | 52 | 0.2909 | 0.8837 |
| 0.2571 | 27.0 | 54 | 0.2429 | 0.8837 |
| 0.2571 | 28.0 | 56 | 0.2280 | 0.9302 |
| 0.2571 | 29.0 | 58 | 0.3984 | 0.8605 |
| 0.2012 | 30.0 | 60 | 0.2905 | 0.8605 |
| 0.2012 | 31.0 | 62 | 0.2509 | 0.9070 |
| 0.2012 | 32.0 | 64 | 0.2888 | 0.8605 |
| 0.2012 | 33.0 | 66 | 0.2689 | 0.8605 |
| 0.2012 | 34.0 | 68 | 0.2417 | 0.8837 |
| 0.1814 | 35.0 | 70 | 0.2418 | 0.9070 |
| 0.1814 | 36.0 | 72 | 0.2491 | 0.9070 |
| 0.1814 | 37.0 | 74 | 0.2998 | 0.9070 |
| 0.1814 | 38.0 | 76 | 0.2744 | 0.9302 |
| 0.1814 | 39.0 | 78 | 0.2664 | 0.9535 |
| 0.1555 | 40.0 | 80 | 0.2160 | 0.9302 |
| 0.1555 | 41.0 | 82 | 0.3875 | 0.9070 |
| 0.1555 | 42.0 | 84 | 0.4608 | 0.9070 |
| 0.1555 | 43.0 | 86 | 0.2978 | 0.9302 |
| 0.1555 | 44.0 | 88 | 0.4461 | 0.8837 |
| 0.1459 | 45.0 | 90 | 0.3603 | 0.9070 |
| 0.1459 | 46.0 | 92 | 0.2973 | 0.9302 |
| 0.1459 | 47.0 | 94 | 0.3385 | 0.8837 |
| 0.1459 | 48.0 | 96 | 0.3239 | 0.8837 |
| 0.1459 | 49.0 | 98 | 0.4315 | 0.8837 |
| 0.1372 | 50.0 | 100 | 0.3519 | 0.8837 |
| 0.1372 | 51.0 | 102 | 0.4148 | 0.8837 |
| 0.1372 | 52.0 | 104 | 0.4687 | 0.8837 |
| 0.1372 | 53.0 | 106 | 0.3287 | 0.8837 |
| 0.1372 | 54.0 | 108 | 0.3194 | 0.9070 |
| 0.1049 | 55.0 | 110 | 0.3703 | 0.8837 |
| 0.1049 | 56.0 | 112 | 0.3522 | 0.9070 |
| 0.1049 | 57.0 | 114 | 0.2572 | 0.9070 |
| 0.1049 | 58.0 | 116 | 0.2523 | 0.9070 |
| 0.1049 | 59.0 | 118 | 0.3136 | 0.9070 |
| 0.1143 | 60.0 | 120 | 0.3638 | 0.9070 |
| 0.1143 | 61.0 | 122 | 0.2916 | 0.9535 |
| 0.1143 | 62.0 | 124 | 0.2521 | 0.9302 |
| 0.1143 | 63.0 | 126 | 0.2735 | 0.9302 |
| 0.1143 | 64.0 | 128 | 0.3112 | 0.9302 |
| 0.0885 | 65.0 | 130 | 0.3246 | 0.9302 |
| 0.0885 | 66.0 | 132 | 0.3264 | 0.9070 |
| 0.0885 | 67.0 | 134 | 0.3351 | 0.9302 |
| 0.0885 | 68.0 | 136 | 0.3455 | 0.9302 |
| 0.0885 | 69.0 | 138 | 0.3579 | 0.9302 |
| 0.1064 | 70.0 | 140 | 0.3926 | 0.9302 |
| 0.1064 | 71.0 | 142 | 0.4370 | 0.9070 |
| 0.1064 | 72.0 | 144 | 0.4149 | 0.9302 |
| 0.1064 | 73.0 | 146 | 0.3315 | 0.9535 |
| 0.1064 | 74.0 | 148 | 0.2704 | 0.9302 |
| 0.1047 | 75.0 | 150 | 0.2600 | 0.9302 |
| 0.1047 | 76.0 | 152 | 0.3215 | 0.9535 |
| 0.1047 | 77.0 | 154 | 0.4110 | 0.9302 |
| 0.1047 | 78.0 | 156 | 0.4414 | 0.8837 |
| 0.1047 | 79.0 | 158 | 0.3589 | 0.9302 |
| 0.0937 | 80.0 | 160 | 0.3085 | 0.9535 |
| 0.0937 | 81.0 | 162 | 0.2889 | 0.9535 |
| 0.0937 | 82.0 | 164 | 0.2787 | 0.9535 |
| 0.0937 | 83.0 | 166 | 0.3251 | 0.9535 |
| 0.0937 | 84.0 | 168 | 0.4483 | 0.9070 |
| 0.0748 | 85.0 | 170 | 0.5490 | 0.8605 |
| 0.0748 | 86.0 | 172 | 0.5422 | 0.8605 |
| 0.0748 | 87.0 | 174 | 0.5282 | 0.8837 |
| 0.0748 | 88.0 | 176 | 0.5733 | 0.8605 |
| 0.0748 | 89.0 | 178 | 0.5978 | 0.8605 |
| 0.0834 | 90.0 | 180 | 0.5763 | 0.8605 |
| 0.0834 | 91.0 | 182 | 0.5270 | 0.8605 |
| 0.0834 | 92.0 | 184 | 0.4946 | 0.8837 |
| 0.0834 | 93.0 | 186 | 0.4881 | 0.9070 |
| 0.0834 | 94.0 | 188 | 0.5115 | 0.8605 |
| 0.1016 | 95.0 | 190 | 0.5445 | 0.8605 |
| 0.1016 | 96.0 | 192 | 0.5537 | 0.8605 |
| 0.1016 | 97.0 | 194 | 0.5451 | 0.8605 |
| 0.1016 | 98.0 | 196 | 0.5323 | 0.8605 |
| 0.1016 | 99.0 | 198 | 0.5190 | 0.8837 |
| 0.0657 | 100.0 | 200 | 0.5155 | 0.8837 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bhaskars113/toyota-paint-attribute-1.2 | bhaskars113 | 2024-05-20T19:28:56Z | 7 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] | text-classification | 2024-05-20T19:28:24Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: I think it sounds pretty good, especially for a pic disc! Sounds on par with
rose pink Cadillac and probably better than smooth big cat. My one issue is I
have a few skips in the first song....but I'm using my backup scratch needle right
now so I'm not sure if it's actually the record The sea glass looks super cool
too, cheers!
- text: Nice. Were the chrome strips on the power assist steps wrapped or painted?
Thinking of dechroming mine and thinking the vinyl will get scuffed off pretty
quickly.
- text: Oh and consider yourself blessed you got meteorite, I have sonic and swirl
marks and scratches are so easily seen, with grey it hides much better
- text: https://preview.redd.it/by2gzb77m2wa1.jpeg?width=1284&format=pjpg&auto=webp&s=6d38c244f6a82b6af4b4eebe91c59f60536f289e
Under the light the paint looks terrible but outside of that, the car is sooo
clean. Wish I could add more than one pic. The interior and everything mechanical
is just amazingly clean.
- text: Not true. Once oxidation has begun there’s no stopping it you can minimize
the oxidation of the affected area by coating it but you can’t stop it
pipeline_tag: text-classification
inference: true
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 36 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 19 | <ul><li>'I usually come on to give a hard time to ‘frame rust’ posts. But damn. This thing must have dedicated parking spot, in the ocean. I never expected it to be that bad. But in saying that the truck will be fine for 20-30 years.'</li><li>'Laser cut or etched with some sort of acid? Probably not just the bumper. Take a look at the frame and suspension. Make sure its not rusting. And if it is maybe you should get a frame coating of sorts'</li><li>'This level of rust is common from my experience, these frames are coated with a black wax like material instead of paint. Eventually this wax material wears off and then corrosion starts. It might look ugly but as long as there are no holes or cracks the frame is fine. If you want to make it look better visually you can spray oil on the frame (like chainsaw chain lube, fluid film etc)'</li></ul> |
| 13 | <ul><li>'When I want to get it cleaned quickly I do the touch less car wash there isn’t many so you’ll have to find one that does it. Also gotta be careful what kind of cleaning chemicals they use cause some products can damage the car even touch less car wash and the pressure it uses. Now if I want to take my time and makes sure it’s done right then I have my own cleaning set up for car washing it. I use Adam polishes stuff from detail to shampoo etc. I also got the Adam polishes pressure washer and the Adam polished towels to dry it up. I recommend watching a lot of videos on YouTube there’s plenty of information of what to do and not to do if you decide to clean it yourself cause you don’t wanna mess it up and cost you more to do a paint correction. Hopefully this helps? ??'</li><li>'If you find one that you like, I’ve found that installing a little blue painters tape under the rack mounting points keeps the hardware from marking up the paint. Make sure it is the blue kind, it comes off much easier when you go to remove it.'</li><li>'Washing the car by hand has always felt like the only option to me. A good electric power washer with the right chemicals makes it easy and wax keeps the dirt off for a good amount of time.'</li></ul> |
| 18 | <ul><li>'Love the color'</li><li>'Ufff she’s gleaming like a ?? stunning ??'</li><li>'The spray paint is cool (and encouraged), but I do wish I could have seen them freshly planted and shiny as well. ... I still say we do this when we retire the current racecars ;) #SubaruRanch \n \n #CadillacRanch #Amarillo\n #TX #automotive #rt66 #America #americanhistory\n #travel #adventure #subaru'</li></ul> |
| 20 | <ul><li>'Yeah I don’t want to be putting plastidip or something on a brand new truck. This would be a lot but what about getting them painted gloss black to match the grille on the inside and and have the chrome be matte black similar to some of the trim'</li><li>'**Foose** **F104 LEGEND** **Gloss Black Milled, i think would look great, and are only about 270.00 for Foose good price**'</li><li>'Nice, love the matte paint/wrap!'</li></ul> |
| 15 | <ul><li>'True are you taking New customer I have a 1937 buick no dent little surface rust'</li><li>"Steelbrush, steel wool, clean with vinegar solution and finish with flex seal. Stops water and oxygen. No oxygen means no oxidation which means no rust. Back in the day they would save rust and add it to their paint or whitewash which gave the barn it's distinctive color which is often imitated but rarely duplicated."</li></ul> |
| 22 | <ul><li>'I’ve got a 2021 Gretsch g6128t-57 duo jet in Cadillac green. The guitar is in excellent condition with some minor scratches and swirls too hard to photograph. Got it from a fellow forum brother about 5 months ago for my first foray into Gretsch guitars.'</li><li>'Hopefully you don’t find too many surprises when you take the paint off. Yeah, it’s hard to tell condition from pictures. Good luck with your project.'</li><li>"$8,999 but just test drove it and I don't think im going to get it because it looked a lot better in the pictures. Has been kept dirty on the lot and there are a lot of swirl marks and it's just not in the best condition as I hoped. Some things are wrong with the interior too. The screen In the dash has some weird spots and there's obviously some electrical issues because the interior lights don't turn off lol"</li></ul> |
| 2 | <ul><li>'I had a similar situation with a black GMC truck and an APC. The roof and top edge of the hood had visible oxidation setting in. '</li><li>'Finally got it detailed. Car looked great from a distance, but up close had lot of oxidation from sitting up for a while. '</li><li>'Have this on my charger hood, small oxidation spot, I was told a new hood is the best option (OEM) does getting it repainted actually work? It was a very reputable non dealer body shop'</li></ul> |
| 21 | <ul><li>"Click for more info and reviews of this Malone Watersport Carriers:\n \n https://www.etrailer.com/Watersport-Carriers/Malone/MPG107MD.html\n \n Check out some similar Watersport Carriers options:\n \n https://www.etrailer.com/dept-pg-Watersport_Carriers-sf-Kayak.aspx\n \n \n \n Search for other popular Chevrolet Equinox parts and accessories:\n \n https://www.etrailer.com/vehicle/2020/Chevrolet/Equinox\n \n \n \n https://www.etrailer.com\n \n Don’t forget to subscribe! https://www.youtube.com/user/etrailertv\n \n \n \n Full transcript: https://www.etrailer.com/tv-install-malone-seawing-kayak-carrier-2020-chevrolet-equinox-mpg107md.aspx\n \n Hey everyone, Charles here at etrailer. And today we're taking a look at the Malone SeaWing Kayak Carrier on the 2020 Chevrolet Equinox. These are gonna be your saddle style kayak carrier. So they're gonna be great for your extra long or your extra wide kayaks that don't fit in a J-style carrier. On our 54 inch crossbars, we still have plenty of room for another set of these for another kayak or even a bike or a cargo basket. These are gonna be made out of a polycarbonate material. They're gonna be very durable and corrosion resistant. They come with the nylon straps as well as the bow and stern tie-downs. And these are gonna fit most of your factory cross bars. We have these on the arrow style and they fit nicely on those, but as well as your square and your round bars as well. And on the inside of the saddle, there is a nice, it's like a thick rubber with grooves for added traction and protection for your kayak. Weight capacity is gonna handle your kayaks of up to 75 pounds. And one thing that I really like about this is that you don't need any tools to install it, So it's very quick to install and uninstall. Just make sure that you have plenty of clearance right here for your crossbar since you do have to twist the knobs underneath your crossbars. Your saddle style kayak carriers are gonna give you extra clearance compared to your J-style kayak carriers. So that's gonna be beneficial for you if you have a larger vehicle like Yukon so that you can park into your garage or go through a drive-through or anything like that. So overall, these are a very solid and durable build, that's gonna last you a long time and it's gonna be perfect for you if you have your extra long or extra wide kayaks. To begin the installation, we have everything laid out that we're gonna use. Malone does provide us with two sets of bolts. A small and a large. We are using the large for our arrow style crossbars that we have. We installed the front carry already but we'll do the rear together. So I'm just gonna stick the bolts through. And then this padded bar, the groove is going to face this way and we're just gonna loosely, loosely screw this guy on. If I get to the same spot, I like to squeeze them in at the same time so that we get an even distribution and so that the carrier isn't lopsided when we tighten them down. All right, so we have the red strap here. We are going to go up through the top. I mean, I guess down to the top and then up through the closer slot here. And we are just gonna set these off to the side until we load a kayak on, then we can just throw it over. This is your rear loading kayak carrier. So if you didn't have the Malone C suction cup attachment that you purchased separately, you can always just lay a thicker pad over here, that way you're not scratching your vehicle. But these are actually close enough to the edge of our arrow bars. 
And I'm tall enough to just slide it on through the side here. It's not the best way but it gets it done. So now I'm just gonna wrap our straps across the top. Make sure that these are flat. Making sure that this leather part on the buckle is facing our kayak to avoid any metal on our kayak here. And then we're just gonna do the same thing on this side. Going down and then up. Pull that through. Through the leather strap and then up through our buckle here. And then we can just roll up the straps, clean them up to get 'em out of the way. So Malone provides us with the bow and certain tie down straps. They are gonna be a stainless steel S hook. And if you didn't have a hood anchor or anything like that, you can always pick one up here at etrailer. Today, we are using a padded dog bone from our etrailer kayak carrier tie down strap kit. It just makes it a lot easier and we don't have to have any metal on frame contact. So we have it where we want it. So now we can just close the hood and attach our hook right here. So once we have our S hook hooked into our strap right here, we're just gonna pull tight. And then maybe around 15 inches or so, we're gonna make a loop, and then another loop. We're gonna wrap that around and then go back through in the middle. And then we are gonna take our free end over here and slide that through the back. Pulling on this side tight and then pulling down on my left. I'm just gonna go ahead and tie it off."</li><li>'Thank you. The spray paint is holding up well.'</li><li>'My C8 is black. I can say after 8 months, PPF holding up really good: knock on wood!'</li></ul> |
| 8 | <ul><li>'your paint looks great, is it original? Looks super smooth'</li><li>'Personally I prefer black mirror and everything else body color. That paint looks smoooth tho.'</li><li>"I used Adam's advanced graphene ceramic coat. It's billed as 10H and 9+ year durability. The kit was $140 on Amazon and i barely used a quarter of it. The paint feels smooth like glass. It's crazy."</li></ul> |
| 0 | <ul><li>'VW Tiguan has been massacred by deep scratches, I have been experimenting with different pad combinations trying to remove the deep scratches.'</li><li>'Black cars in general don’t hold their value as well as other colors due to black paint showing scratches and swirls extremely easily. These are not investment vehicles, at the end of the day they are economy Chrysler products made with very little concern for quality control.'</li><li>"I'm not sure why they went Gloss black on the GT front but matte on the EB. Glossy sucks to clean and scratches easily."</li></ul> |
| 27 | <ul><li>"First, it looks like your factory screen protector is still on the infotainment and I need to peel it off... Second, piano black interiors attract/show so much dust that you'd swear dust was manufactured by the trim. I keep a very small, tiny version of a California Duster in my car to tidy bits up. One end is like the duster, the other end is like a fat paint brush to get into crevices."</li><li>'The reason we charge high dollar ppf prices is because we budget throw away materials for contamination like that. We run dust extraction machines like crazy in our install room. We also disassemble everything we can within reason we did a Hummer H2 custom bulk install on every panel and charged $15,000. Even at that price, we don’t look forward to doing another one. Customer was thrilled though.'</li><li>'Natural wax is actually oil like and attracts dust and dirt particles (best shine tho) Synthetic sealants / hybrid wax or ceramic / graphene will repel elements.'</li></ul> |
| 9 | <ul><li>'haha yea black is my color, esp with a glossy coat, love the look of shiny black & the vibrant red tail lights! the red is pretty cool too, i like the shade they have. I feel not all cars can pull off such a vibrant red!'</li><li>'Hi All! The Bronco is ready for paint, and I’m torn between these three options. You guys have seen everything and I value your opinion from the heart. Was gonna do candy red but decided it’s too loud for this truck. Matte Graphite - looks insane irl and I think will pop with chrome and black accents. Every manufacturer is coming out with “special” satin/matte paints, and it’s super popular. I’ve always believed the bronco should be a nice glossy paint, and I’m truly happy with either option one or two. But then option three came in to derail my thought process. What do you find folks think would be a logical choice both in terms of aesthetic and maintenance?'</li><li>'Nice look. The car is mirror like with that glossy finish. Wonderful looking Challenger'</li></ul> |
| 3 | <ul><li>'I have a 22 Taos S 4motion. 40k miles. Pluses are that there car is reasonably comfortable and drives well. Decent mileage (28.6). With factory tires it’s been great in the snow. Plenty of room. Downsides - Paint is cracking (see previous post) and they will not cover (warranty on paint ends at 36k).'</li><li>'Paint formulas changed drastically in 2007. They went from an oil base paint to water based, due to EPA regulations. Smaller rock chips, paint cracking, easier to scratch, etc. have gone up ever since.'</li><li>'Thanks! It cost about $300 total. Same reason we repainted it… had too much cracked paint on the hood.'</li></ul> |
| 12 | <ul><li>"The driver's & front sides of the 235 stovebolt 6 engine has been painted blue as in a '56 Chevrolet blue flame engine out of a car (see pics before & after). It happens to be the identical type engine, i.e., from a Chevrolet car & born in 1956 that is in my other antique pick-up. I want the engine looking nice before it's dropped into the '52 Chevrolet Suburban Carryall."</li><li>'Southeast Alabama . That is one awesome looking 90 model truck. I have a 1994 Chevrolet Silverado extended cab that I am and the shop and paint shop are trying kind of go back to factory or as close as we can . It is by no means a 1000 Horse power . Just the plain old 350- 5.7 Throttle body.'</li><li>'I recently bought a 1987 Cadillac Brougham. Mechanically, it is in impeccable shape with only 41,000 original miles on the clock. The only issues are the bumper fillers (that I have replacements for), a few minor dings, and the paint.'</li></ul> |
| 26 | <ul><li>'I’ve got a 2017 diesel Colorado and am happy with it stock emissions 147,000. One thing I learned: DOC is a wear part in addition to DPF. (There is no code for a bad oxidation catalyst just P200C high exhaust temps. I haven’t made master post about it on Coloradofans yet.) Anyway I’m happy but lots of information on chat rooms is confused and not always corrected by others who know. I like the Z71 better but do you ! !'</li><li>"Most recently used that trick on my cousin's 07 infiniti g37 and it blew his mind lol. As always follow up with your LSP of choice. I tried it on my old cadillac cts and it didn't even make a dent in the oxidation, had to break out the sand paper for that one. Thanks, looks like different plastic materials react differently. I wish we still have the good, old glass headlights."</li><li>'Oxidation of the metal. It’s not shiny. Nor the cracks'</li></ul> |
| 7 | <ul><li>'I’m not a big fan of the old F-150, but that paint is sharp, and I love that blue color.'</li><li>'Wow, cool . I am the 3rd owner of a 2001 Ford Ranger Edge pickup, bright island blue metallic paint 131k miles and original paint. The truck still has 80% of its factory installed parts on it today. Not bad for a Maine vehicle.'</li><li>'White, however, is the best and most long lasting color. I bought a 2005 Cadillac Deville (white), and the paint looks new. I have had other examples, the white is the most long lasting color as it does not absorb heat.'</li></ul> |
| 14 | <ul><li>'I was eager to see what they were going to be but as a detail (hobby) guy, even for a garage go queen that paint is a no go. Really think this is a collector money grab. $15k is just too much for paint that is 8k on a Cadillac If you want it, GO FOR IT, but if you don\'t have the whole car wrapped in PPF, road debris is really going to do a number on it if that 15k doesn\'t include some extra mils and durability additive of paint. Not sure it will have a "clear" to polish out imperfections?'</li><li>'That makes no sense and looks horrible. It may be painted but may also be removable. If someone just hand laid it on top of the clear coat, it may be able to be removed. The easy answer is just put a black vinyl stripe over it and forget it ever was there'</li><li>'Originally Posted by SnakeEyeSS (Post 11325666) I was eager to see what they were going to be but as a detail (hobby) guy, even for a garage go queen that paint is a no go. Really think this is a collector money grab. $15k is just too much for paint that is 8k on a Cadillac If you want it, GO FOR IT, but if you don\'t have the whole car wrapped in PPF, road debris is really going to do a number on it if that 15k doesn\'t include some extra mils and durability additive of paint. Not sure it will have a "clear" to polish out imperfections?'</li></ul> |
| 5 | <ul><li>'Both will be tough to keep clean. The gray will be more forgiving when it gets some swirls though. Only way I’d have a black car is if I had a garage to keep it in and it wasn’t my daily lol.'</li><li>'Even when it’s dirty I find the phantom a bit more “forgiving” vs jet black/plain black. Hides it’s age a bit more too since the color is busier vs a straightforward, unforgiving black paint.'</li><li>"I've owned my share of black vehicles and I am too OCD to own them without spending an inordinate amount of time taking care of them. I'm a white, silver and maybe gunmetal grey guy now just because of the maintenance."</li></ul> |
| 16 | <ul><li>'Color match. Now do the mirror caps and the door handles. If you decide to do the “bump strip on the doors, replace them. Don’t paint them. The paint doesn’t stick as well as you’d like in the long run on the plastic chrome.'</li><li>"2014 chevy equinox. There is a very slight shake at highway speed (75mph+) but when I hit the brakes my car turns into the paint mixer at home depot. I haven't noticed it with city driving, only highway."</li><li>"Personally, I've never worked on an Escalade but I've been around Cadillacs for a while. I was taught never to buy an old used Cadillac because of their engineering. If you want to take apart one thing, be prepared to take out everything. Parts are expensive, aluminum cracks and warps. In general I found Cadillacs to be engineering boobie traps with lots of spots to rip your arm open and scratch your hands. I guess that's just my opinion. You seem to like the challenges and I respect you for it."</li></ul> |
| 24 | <ul><li>"MY HD is stock, so no loud pipes. It shakes like a Home Depot paint mixer at idle, but silky smooth on the move. You'll love the Goldwing, just be careful in thinking that you're going from a Ford to a Cadillac in the comfort department."</li><li>'Thoughts on leather conditioners - Apple Leather: puff up quilting about 2-3 applications but take care because it builds up shine, and to lube up stiff leather chain straps ?? - Saphir: gives life to dry grained leather and buffs out scratches - Cadillac: soft smooth leathers like lambskin and an all around mvp safe bet for all types of leather - Leather Honey: gives life to shoes, but ends tragically if used on grained leathers What are your thoughts?? Feel free to disagree / disprove the above!'</li><li>'Same. Smooth that corner, apply touch up paint, call it a day. No one will see it'</li></ul> |
| 1 | <ul><li>'Has the underground color and was commenting the other day about bad paint from the factory. I thought he was crazy until I went to look at one. Sure enough I went to the dealership and the black one I saw looked like it had already been through the car wash several times.'</li><li>'There are known paint defects with Hyundai-Kia white paint. Assuming this is factory paint, you should contact Kia and push them to fix it'</li><li>'My coworker has had his Fit painted 3 times due to shitty factory paint. It would all flake off near the window.'</li></ul> |
| 11 | <ul><li>'Progressive, I had to fight for every dollar. They wanted to take $150 off the cash value they were paying for a tiny scuff mark in the interior plastic in the trunk area of the car, since it was a “pre-accident damageâ€_x009d_ which was total bullshit.'</li><li>'The Bronco Raptor is just as exotic or rare as your base model corvette. Not even a c8r. Calm down you didn’t scuff your shoes.'</li><li>'They managed to get to her, and she suffered no serious injuries, save that her leg was scuffed pretty badly (blue and flathead catfish have no actual teeth, just a rough inner lip like sandpaper), but the experience was very traumatizing and made several newspapers and local TV news broadcasts (allegedly... I never saw this myself, despite my attempts in the past to find evidence for it). I\'ve also spoken personally to several people who have claimed to do underwater work for the lakes in scuba gear (not sure what it is, save that it\'s got something to do with dam maintenance), and they have told me personally that they have seen catfish nesting at the foot of some dams that are "...the size of Buicks." Make of that how you will. There is also, of course, the ever-present rumor of the freshwater octopus in Oklahoma, but...can\'t say I have any experience with that one.'</li></ul> |
| 6 | <ul><li>"Tesla's paint quality isn't the best but if you've ever owned a Honda then you know the pain. Somehow Hyundai is one of the few car companies to figure out how to make really durable paint."</li><li>'Hyundai puts a second coat of white paint on the car to make it more durable, hence the extra cost.'</li><li>"German cars and luxury cars in general will have significantly more durable paint. Honda on their speciality cars (e.g. CTR, NSX) will use harder paint. Aluminum bodied F series trucks and Audi’s usually have pretty solid paint. Mazda's speciality colors and Acura's $6k paint jobs are top notch"</li></ul> |
| 4 | <ul><li>' I practiced on a 1990 Honda Accord that had neglected rough paint and had been sitting for 10 years. '</li><li>'Probably done in shipping. It’s more hassle to get fixed than touch up paint. Had a scrape on my vehicle skirt. The paint issues that are worrisome are factory and usually take some time to appear as ripples, lines, or premature fading.'</li><li>'Hey to all that have a hummer ev. I just took delivery and parked in garage. I noticed in the garage when light hits the right angle the reflection ripples like something is there.'</li></ul> |
| 28 | <ul><li>'My 2018 was peeling when I got it brand new. Instead of having Chevy get me a new one, that would fail again I took some acetone to it to remove the remaining red. Then I bought sharpie paint pens and colored them yellow to match my calipers. Five years later the yellow is still perfect.'</li></ul> |
| 32 | <ul><li>'Hey a little unrelated, but in my C8 (3LT) my leather dash is bubbling and delaminating. My dealer is taking care of it but why is this still an issue with the 3LTs even after being an issue for years on the C7? Is the glue they use different for the different leathers or something?'</li></ul> |
| 25 | <ul><li>'I found a hummer EV with orange paint on the whole body like that!'</li><li>'Oooo paint some engine block orange!'</li><li>'I see some paint, was it orange or red paint originally?'</li></ul> |
| 33 | <ul><li>"We had to rescue a little male black-chinned hummer today. He had somehow managed to skewer a bug and got it stuck, holding his beak closed. We watched him for 2 days, trying to scratch and rub it, but just couldn't get it off."</li><li>"The Chevy Bolt in the chicken coop is to keep animals out, I bet. I have friends who own a Chevy Bolt, and *twice* they've had squirrels eat through the electrical wiring in their car. Apparently, the wires are wrapped in a soy-based coating that hungry animals like to nibble on."</li></ul> |
| 35 | <ul><li>'Squatted, white, late model GMC 2500. Gasser, RWD, with the thinnest of tires. Painted on almost.'</li><li>'Yes my rear seats ultimate leather is thinner than paper and is pealing away after 4 weeks owning a 2024 wtf. I want a lifetime warranty as long as I own the truck cheap cheap painted fake leather'</li><li>'"Alexa, put paint thinner on my shopping list."'</li></ul> |
| 30 | <ul><li>'A good steam clean under carriage and some under coating it’s should be good as new'</li><li>'Surface rust. A wire brush, some rust converter then chassis paint will make the frame look like new.'</li><li>'I feel like Claptons "Cocaine" would be more appropriate with that pristine white paint and the t-tops'</li></ul> |
| 17 | <ul><li>'I absolutely love the C8 in white. The paint matched side intake trim looks amazing, too. Excellent choice. 10/10.'</li><li>'Acs in Canada has perfect carbon flash matching spoilers. I think they paint colors as well. Very high quality'</li></ul> |
| 23 | <ul><li>'Theres no guarantee about the trans, got my 17 w/ 90k miles and still had to fix the shudder myself. Things to look for... Oil lines in engine bay (known to blow around 100k miles) Rust underneath All lighting, inside AND out. The center cluster is known to have dull lights/ bulbs (icons) Use the chair, rock back and forth and everything to see if anything falls short Every detail matters, inspect paint, edges, even under the bed (lot of dust and dirt collects under the tailgate)'</li><li>'Idk how this came up on my feed. Nice paint combo. That town looks boring.'</li><li>'What do you recomend for putting a coating on rubber mats. I hate how dull they look when youre done with a good detail.'</li></ul> |
| 10 | <ul><li>"I'd tape off the bottom of the bumper where it starts to taper down, conveniently where your scratches start....and I'd paint the entire bottom underside bumper black. So all the scratches will be covered in black, then add a splitter. No one would ever know except you. I'd use a decent quality spray paint and make sure to either remove the bumper from the car or get it high enough to cleanly spray the paint proper. 3 or 4 coats of black, couple coats of clear. Maaaybe"</li><li>'What’s going on in pic 3? That looks bizarre!! How could that be missed before delivery. Might have a tougher time with the small interior scratch.'</li><li>'I had a sweet ‘71 Chevy Blazer and some knuckleheads aggressively used a screwdriver to pry out the cheapest ‘97 Jensen CD player scratching up the original metal dash. Insult to injury - disc 4 from Bob Dylan’s box set ‘biograph’ was in there'</li></ul> |
| 29 | <ul><li>'I am original owner of a 2001 Sierra 1500 SLE w/5.3L Z71 Have 152,000mi on it. It’s also part of the infamous cracked head list. The body and frame are in fantastic shape as is the original paint. No rust anywhere. I attribute this to maybe washing it 30x in its lifetime. :) Can I replace the engine with a newer LS or is it best to rebuild these? I’m not looking for massive power, just reliability.'</li></ul> |
| 34 | <ul><li>'Wonder if it traps moisture under it or has any vibration that could cause paint wear. I love the looks of this tho!'</li><li>'I bet it’s from the brushes. The finish on the paint does not like the rough brushes from car washes. When I got my bolt they told me absolutely do not take it to a car wash. Hand wash only or this could happen'</li></ul> |
| 31 | <ul><li>'Customer service Ordered a new leer 100xr topper for my 2021 gmc. Waited over 2months for it to be built and when dealer installed it i noticed that the window gasket is half way out on one side and then there is a couple rough grinder marks on the lower edges. Not to mention when the dealer installed it they mounted it out of alignment and by the time i drove from dealer to my house it already rubbed the paint on the corner of my bedside above taillight completely off down to the bare metal.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("bhaskars113/toyota-paint-attribute-1.2")
# Run inference
preds = model("Oh and consider yourself blessed you got meteorite, I have sonic and swirl marks and scratches are so easily seen, with grey it hides much better")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 46.2451 | 924 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 16 |
| 1 | 16 |
| 2 | 16 |
| 3 | 16 |
| 4 | 16 |
| 5 | 16 |
| 6 | 16 |
| 7 | 16 |
| 8 | 16 |
| 9 | 16 |
| 10 | 4 |
| 11 | 11 |
| 12 | 20 |
| 13 | 13 |
| 14 | 16 |
| 15 | 2 |
| 16 | 20 |
| 17 | 2 |
| 18 | 8 |
| 19 | 5 |
| 20 | 14 |
| 21 | 15 |
| 22 | 3 |
| 23 | 5 |
| 24 | 18 |
| 25 | 3 |
| 26 | 13 |
| 27 | 7 |
| 28 | 1 |
| 29 | 1 |
| 30 | 4 |
| 31 | 1 |
| 32 | 1 |
| 33 | 2 |
| 34 | 2 |
| 35 | 4 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 1 | 0.1689 | - |
| 0.0563 | 50 | 0.2155 | - |
| 0.1126 | 100 | 0.139 | - |
| 0.1689 | 150 | 0.0656 | - |
| 0.2252 | 200 | 0.0359 | - |
| 0.2815 | 250 | 0.0462 | - |
| 0.3378 | 300 | 0.0182 | - |
| 0.3941 | 350 | 0.0235 | - |
| 0.4505 | 400 | 0.0401 | - |
| 0.5068 | 450 | 0.042 | - |
| 0.5631 | 500 | 0.0461 | - |
| 0.6194 | 550 | 0.0034 | - |
| 0.6757 | 600 | 0.0181 | - |
| 0.7320 | 650 | 0.0094 | - |
| 0.7883 | 700 | 0.0584 | - |
| 0.8446 | 750 | 0.0175 | - |
| 0.9009 | 800 | 0.0036 | - |
| 0.9572 | 850 | 0.0274 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
dsfsi/simcse-dna | dsfsi | 2024-05-20T19:28:53Z | 36 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"DNA",
"biology",
"genomics",
"protein",
"kmer",
"cancer",
"gleason-grade-group",
"arxiv:2104.08821",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T08:57:57Z | ---
license: cc-by-sa-4.0
tags:
- DNA
- biology
- genomics
- protein
- kmer
- cancer
- gleason-grade-group
---
## Project Description
This repository contains the trained model for our paper **Fine-tuning a Sentence Transformer for DNA & Protein tasks**, which is currently under review at BMC Bioinformatics. This model, called **simcse-dna**, is based on the original implementation of **SimCSE [1]**. The original model was adapted for DNA downstream tasks by training it on a small sample of k-mer tokens generated from the human reference genome, and it can be used to generate sentence embeddings for DNA tasks.
### Prerequisites
-----------
Please see the original [SimCSE](https://github.com/princeton-nlp/SimCSE) repository for installation details. The model will also be hosted on Zenodo (DOI: 10.5281/zenodo.11046580).
### Usage
Run the following code to get the sentence embeddings:
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Import trained model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("dsfsi/simcse-dna")
model = AutoModel.from_pretrained("dsfsi/simcse-dna")
# sentences is your list of DNA k-mer tokens of length 6 (6-mers)
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Get the embeddings
with torch.no_grad():
embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```
The retrieved embeddings can then be used as input features for a machine learning classifier.
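For example, a sketch of that downstream step, assuming `embeddings` from the snippet above and a hypothetical list of `labels` (the Random Forest settings are illustrative, not the exact configuration from the paper):
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = embeddings.cpu().numpy()  # sentence embeddings from the snippet above
y = labels                    # hypothetical class labels, one per DNA sequence

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```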
## Performance on evaluation tasks
More details about the datasets and how to access them can be found in the paper **(TBA)**.
### Task 1: Detection of colorectal cancer cases (after oversampling)
| | 5-fold Cross Validation accuracy | Test accuracy |
| --- | --- | ---|
| LightGBM | 91 | 63 |
| Random Forest | **94** | **71** |
| XGBoost | 93 | 66 |
| CNN | 42 | 52 |
| | 5-fold Cross Validation F1 | Test F1 |
| --- | --- | ---|
| LightGBM | 91 | 66 |
| Random Forest | **94** | **72** |
| XGBoost | 93 | 66 |
| CNN | 41 | 60 |
### Task 2: Prediction of the Gleason grade group (after oversampling)
| | 5-fold Cross Validation accuracy | Test accuracy |
| --- | --- | ---|
| LightGBM | 97 | 68 |
| Random Forest | **98** | **78** |
| XGBoost |97 | 70 |
| CNN | 35 | 50 |
| | 5-fold Cross Validation F1 | Test F1 |
| --- | --- | ---|
| LightGBM | 97 | 70 |
| Random Forest | **98** | **80** |
| XGBoost |97 | 70 |
| CNN | 33 | 59 |
### Task 3: Detection of human TATA sequences (after oversampling)
| | 5-fold Cross Validation accuracy | Test accuracy |
| --- | --- | ---|
| LightGBM | 98 | 93 |
| Random Forest | **99** | **96** |
| XGBoost |**99** | 95 |
| CNN | 38 | 59 |
| | 5-fold Cross Validation F1 | Test F1 |
| --- | --- | ---|
| LightGBM | 98 | 92 |
| Random Forest | **99** | **95** |
| XGBoost | **99** | 92 |
| CNN | 58 | 10 |
## Authors
-----------
* Mpho Mokoatle, Vukosi Marivate, Darlington Mapiye, Riana Bornman, Vanessa M. Hayes
* Contact details : [email protected]
## Citation
-----------
Bibtex Reference **TBA**
### References
<a id="1">[1]</a>
Gao, Tianyu, Xingcheng Yao, and Danqi Chen. "Simcse: Simple contrastive learning of sentence embeddings." arXiv preprint arXiv:2104.08821 (2021). |
ssmits/Falcon2-5.5B-Swedish-GGUF | ssmits | 2024-05-20T19:28:18Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"sv",
"base_model:tiiuae/falcon-11B",
"base_model:quantized:tiiuae/falcon-11B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T18:43:10Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- tiiuae/falcon-11B
license: apache-2.0
language:
- sv
---
# ssmits/Falcon2-5.5B-Swedish-Q5_K_M-GGUF
This model was converted to GGUF format from [`ssmits/Falcon2-5.5B-Swedish`](https://huggingface.co/ssmits/Falcon2-5.5B-Swedish) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ssmits/Falcon2-5.5B-Swedish) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ssmits/Falcon2-5.5B-Swedish-Q5_K_M-GGUF --model falcon2-5.5b-swedish.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ssmits/Falcon2-5.5B-Swedish-Q5_K_M-GGUF --model falcon2-5.5b-swedish.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m falcon2-5.5b-swedish.Q5_K_M.gguf -n 128
``` |
ssmits/Falcon2-5.5B-German-GGUF | ssmits | 2024-05-20T19:28:02Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"de",
"base_model:tiiuae/falcon-11B",
"base_model:quantized:tiiuae/falcon-11B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T18:35:26Z | ---
language:
- de
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model:
- tiiuae/falcon-11B
---
# ssmits/Falcon2-5.5B-German-Q5_K_M-GGUF
This model was converted to GGUF format from [`ssmits/Falcon2-5.5B-German`](https://huggingface.co/ssmits/Falcon2-5.5B-German) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ssmits/Falcon2-5.5B-German) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ssmits/Falcon2-5.5B-German-Q5_K_M-GGUF --model falcon2-5.5b-german.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ssmits/Falcon2-5.5B-German-Q5_K_M-GGUF --model falcon2-5.5b-german.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m falcon2-5.5b-german.Q5_K_M.gguf -n 128
``` |
mizoru/whisper-large-ru-ORD_0.9_peft_0.2 | mizoru | 2024-05-20T19:27:57Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"ru",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T15:24:08Z | ---
language:
- ru
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-large-v2
metrics:
- wer
model-index:
- name: 'Whisper Large Ru ORD 0.9 Peft PEFT 4-bit Q DoRA - Mizoru '
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mizoru/ORD/runs/te5djaa5)
# Whisper Large Ru ORD 0.9 Peft PEFT 4-bit Q DoRA - Mizoru
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the ORD_0.9 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9988
- Wer: 48.4439
- Cer: 26.5242
- Clean Wer: 40.8650
- Clean Cer: 20.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
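A hedged usage sketch, assuming the PEFT adapter in this repo is applied on top of `openai/whisper-large-v2` (audio preparation is only outlined in comments):
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "mizoru/whisper-large-ru-ORD_0.9_peft_0.2")
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v2", language="russian", task="transcribe"
)

# Prepare a 16 kHz mono waveform (e.g. with librosa or 🤗 Datasets), then:
# input_features = processor(audio_array, sampling_rate=16000, return_tensors="pt").input_features
# predicted_ids = model.generate(input_features)
# print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```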
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Clean Cer | Clean Wer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:-------:|:---------:|:---------:|:---------------:|:-------:|
| 1.216 | 1.0 | 550 | 27.9352 | 22.0432 | 43.2693 | 1.0350 | 50.7505 |
| 1.1847 | 2.0 | 1100 | 26.5324 | 20.9303 | 41.2903 | 1.0187 | 49.1670 |
| 1.055 | 3.0 | 1650 | 26.7141 | 21.0494 | 41.5960 | 0.9889 | 48.8428 |
| 0.9137 | 4.0 | 2200 | 26.5242 | 20.9832 | 40.8650 | 0.9988 | 48.4439 |
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 |
hfdsajkfd/distilbert-base-uncased-finetuned-ner | hfdsajkfd | 2024-05-20T19:27:42Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-20T19:23:21Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9259054770318021
- name: Recall
type: recall
value: 0.9380243875153821
- name: F1
type: f1
value: 0.9319255348707974
- name: Accuracy
type: accuracy
value: 0.9837641190207635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Precision: 0.9259
- Recall: 0.9380
- F1: 0.9319
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
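As an unofficial usage sketch, the checkpoint can be queried through the token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hfdsajkfd/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```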
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2448 | 1.0 | 878 | 0.0713 | 0.8957 | 0.9193 | 0.9074 | 0.9796 |
| 0.0517 | 2.0 | 1756 | 0.0597 | 0.9206 | 0.9357 | 0.9281 | 0.9830 |
| 0.0314 | 3.0 | 2634 | 0.0613 | 0.9259 | 0.9380 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Arshia-HZ/NLP-AriaBert-Digimag | Arshia-HZ | 2024-05-20T19:26:20Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T18:33:40Z | ---
license: apache-2.0
language:
- fa
widget:
- text: "دختری در قطار؛ پرفروشترین کتاب نیویورکتایمز را امروز رایگان بخوانید کتاب دختری در قطار هدیه امروز فیدیبو است."
- text: "استرینگکست: با ترسناکترین بیماری جهان آشنا شوید با گذر زمان و پیشرفت امکانات، سن انسانها روز بهروز بیشتر میشود. ولی با این بالا رفتن سن، بیماریهای جدید و خطرناکی خودشون را به ما نشان میدهند."
---
## Persian Text Classification [DigiMag, Persian News]
The task is supervised text classification on two existing datasets: `DigiMag` and `Persian News`.
### DigiMag
A total of 8,515 articles scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). This dataset includes seven different classes.
1. Video Games
2. Shopping Guide
3. Health Beauty
4. Science Technology
5. General
6. Art Cinema
7. Books Literature
| Label | # |
|:------------------:|:----:|
| Video Games | 1967 |
| Shopping Guide | 125 |
| Health Beauty | 1610 |
| Science Technology | 2772 |
| General | 120 |
| Art Cinema | 1667 |
| Books Literature | 254 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz)
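A minimal inference sketch with 🤗 Transformers (assuming this checkpoint loads as a standard `text-classification` model; the snippet reuses the widget example above):
```python
from transformers import pipeline

# Classify a Persian article snippet into one of the seven DigiMag classes
classifier = pipeline("text-classification", model="Arshia-HZ/NLP-AriaBert-Digimag")
print(classifier("دختری در قطار؛ پرفروشترین کتاب نیویورکتایمز را امروز رایگان بخوانید"))
```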
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Digikala Magazine | 93.65* | 93.59 | 90.72 | |
konstaya/qa_model_study_1 | konstaya | 2024-05-20T19:19:59Z | 131 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:sberquad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-20T17:12:23Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- sberquad
model-index:
- name: qa_model_study_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_model_study_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4337
## Model description
More information needed
## Intended uses & limitations
More information needed
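A minimal usage sketch with the `question-answering` pipeline (the Russian example is illustrative, matching the SberQuAD domain):
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint
qa = pipeline("question-answering", model="konstaya/qa_model_study_1")
result = qa(
    question="Где расположена Эйфелева башня?",
    context="Эйфелева башня расположена в Париже, столице Франции.",
)
print(result["answer"], result["score"])
```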
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1351 | 1.0 | 750 | 2.6338 |
| 2.5385 | 2.0 | 1500 | 2.4813 |
| 2.3433 | 3.0 | 2250 | 2.4337 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
1aurent/vit_giant_patch14_224.dinobloom | 1aurent | 2024-05-20T19:19:12Z | 26 | 0 | timm | [
"timm",
"safetensors",
"feature-extraction",
"image-classification",
"arxiv:2404.05022",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2024-05-20T18:59:53Z | ---
tags:
- timm
- feature-extraction
- image-classification
library_name: timm
license: apache-2.0
---
# Model card for vit_giant_patch14_224.dinobloom

## Model Details
- **Model Type:** Feature backbone
- **Model Stats:**
- Params: 1136M (giant)
- Image size: 224 x 224 x 3
- Patch size: 14 x 14 x 3
- **Repository:** [github.com:marrlab/DinoBloom](https://github.com/marrlab/DinoBloom)
- **Original Weights:** [Zenodo](https://zenodo.org/records/10908163)
- **License:** [Apache License 2.0](https://github.com/marrlab/DinoBloom/blob/main/LICENSE)
- **Papers:**
- [DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology](https://arxiv.org/abs/2404.05022)
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
# get example histology image
img = Image.open(
urlopen(
"https://raw.githubusercontent.com/zxaoyou/segmentation_WBC/master/Dataset%201/001.bmp"
)
)
# load model from the hub
model = timm.create_model(
model_name="hf-hub:1aurent/vit_giant_patch14_224.dinobloom",
pretrained=True,
).eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
data = transforms(img).unsqueeze(0) # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
output = model(data) # output is a (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@misc{koch2024dinobloom,
title = {DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology},
author = {Valentin Koch and Sophia J. Wagner and Salome Kazeminia and Ece Sancar and Matthias Hehr and Julia Schnabel and Tingying Peng and Carsten Marr},
year = {2024},
eprint = {2404.05022},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
``` |
matthieuzone/FETAbis | matthieuzone | 2024-05-20T19:17:55Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:09:37Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/FETAbis
<Gallery />
## Model description
These are matthieuzone/FETAbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/FETAbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
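Until the official snippet is added, here is a minimal sketch (assuming the standard diffusers SDXL text-to-image pipeline and the trigger prompt above):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/FETAbis")
image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```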
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
LoneStriker/Yi-1.5-34B-32K-GGUF | LoneStriker | 2024-05-20T19:12:26Z | 15 | 5 | null | [
"gguf",
"arxiv:2403.04652",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T18:17:38Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
HariprasathSB/whisper-vulnerablee | HariprasathSB | 2024-05-20T19:11:16Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:HariprasathSB/whisper-vulnerable",
"base_model:finetune:HariprasathSB/whisper-vulnerable",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T17:05:07Z | ---
license: apache-2.0
base_model: HariprasathSB/whisper-vulnerable
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-vulnerablee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-vulnerablee
This model is a fine-tuned version of [HariprasathSB/whisper-vulnerable](https://huggingface.co/HariprasathSB/whisper-vulnerable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0136
- Wer: 77.9557
## Model description
More information needed
## Intended uses & limitations
More information needed
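A minimal transcription sketch with the `automatic-speech-recognition` pipeline (`sample.wav` is a placeholder for a local audio file):
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="HariprasathSB/whisper-vulnerablee")
print(asr("sample.wav")["text"])  # "sample.wav": placeholder path
```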
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0637 | 1.7621 | 200 | 1.0136 | 77.9557 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/EPOISSESbis | matthieuzone | 2024-05-20T19:09:21Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T19:01:09Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/EPOISSESbis
<Gallery />
## Model description
These are matthieuzone/EPOISSESbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/EPOISSESbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
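Until the official snippet is added, a minimal sketch (assuming the standard diffusers SDXL pipeline and the trigger prompt above):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/EPOISSESbis")
image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```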
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
PabitraJiban/Credit-card-collection-intent-classification | PabitraJiban | 2024-05-20T19:02:56Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T19:00:22Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8798
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0973 | 1.0 | 2 | 1.0807 | 0.4667 |
| 1.0801 | 2.0 | 4 | 1.0622 | 0.5333 |
| 1.0713 | 3.0 | 6 | 1.0386 | 0.5333 |
| 1.0396 | 4.0 | 8 | 1.0092 | 0.6 |
| 1.0034 | 5.0 | 10 | 0.9786 | 0.8 |
| 0.9929 | 6.0 | 12 | 0.9501 | 0.8667 |
| 0.9552 | 7.0 | 14 | 0.9236 | 0.8667 |
| 0.9386 | 8.0 | 16 | 0.9011 | 0.8667 |
| 0.9084 | 9.0 | 18 | 0.8862 | 0.8667 |
| 0.897 | 10.0 | 20 | 0.8798 | 0.8667 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
UsmanGhias/IceAge | UsmanGhias | 2024-05-20T19:02:29Z | 0 | 1 | null | [
"tensorflow",
"gradio",
"image-processing",
"glaciers",
"en",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T18:07:22Z | ---
language: en
tags:
- tensorflow
- gradio
- image-processing
- glaciers
license: apache-2.0
datasets:
- custom
metrics:
- accuracy
widget:
- text: "Upload an image of a glacier to predict boundaries."
---
# SGDNet Gradio Interface
This is a Gradio interface for the SGDNet model, designed to extract glacier boundaries from multisource remote sensing data. The interface provides a user-friendly method to upload satellite images and visualize the predicted glacier boundaries.
## Setup Instructions
Follow these steps to get the Gradio interface up and running on your local machine:
### Prerequisites
Ensure you have Python installed on your system. The interface is built using Gradio, and the model is implemented in TensorFlow.
### Installation
1. **Clone the repository:**
Ensure you have git installed and then clone the repository containing the SGDNet model and the Gradio interface code.
```bash
git clone https://huggingface.co/your_username/SGDNet-gradio
cd SGDNet-gradio
```
2. **Install the required packages:**
Use pip to install the required Python packages from the `requirements.txt` file.
```bash
pip install -r requirements.txt
```
### Running the Interface
1. **Start the Gradio app:**
Run the Gradio interface using the command below. This command executes the Python script that launches the Gradio interface.
```bash
python gradio_app.py
```
2. **Access the Interface:**
Open your web browser and navigate to the URL provided in the command line output (typically `http://127.0.0.1:7860`). This URL hosts your interactive Gradio interface.
## How to Use the Interface
- **Upload Image**: Click on the upload area or drag and drop an image file to upload a satellite image of a glacier.
- **Submit Image**: After uploading the image, click the "Predict" button to process the image through the SGDNet model.
- **View Results**: The interface will display the original image alongside the glacier boundary predictions, allowing you to compare and analyze the results.
## Features
- **Interactive Uploads**: Users can easily upload images through a simple web interface.
- **Real-time Predictions**: The model processes images and provides predictions in real-time.
- **Visual Comparisons**: Directly compare the uploaded images with their prediction outputs.
## Further Help
If you encounter any issues or have questions about using the interface, please refer to the documentation on Hugging Face or submit an issue in the repository.
---
|
tarsssss/my_model | tarsssss | 2024-05-20T18:57:32Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-20T18:51:29Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/COMTEbis | matthieuzone | 2024-05-20T18:52:16Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T18:44:06Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/COMTEbis
<Gallery />
## Model description
These are matthieuzone/COMTEbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/COMTEbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
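Until the official snippet is added, a minimal sketch (assuming the standard diffusers SDXL pipeline and the trigger prompt above):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/COMTEbis")
image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```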
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
alexx1/llama3-omegle-lora-r16-16bit | alexx1 | 2024-05-20T18:51:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T18:48:51Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** alexx1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
1aurent/vit_small_patch14_224.dinobloom | 1aurent | 2024-05-20T18:38:08Z | 32 | 0 | timm | [
"timm",
"safetensors",
"feature-extraction",
"image-classification",
"arxiv:2404.05022",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2024-05-20T17:47:52Z | ---
tags:
- timm
- feature-extraction
- image-classification
library_name: timm
license: apache-2.0
---
# Model card for vit_small_patch14_224.dinobloom

## Model Details
- **Model Type:** Feature backbone
- **Model Stats:**
- Params: 22M (small)
- Image size: 224 x 224 x 3
- Patch size: 14 x 14 x 3
- **Repository:** [github.com:marrlab/DinoBloom](https://github.com/marrlab/DinoBloom)
- **Original Weights:** [Zenodo](https://zenodo.org/records/10908163)
- **License:** [Apache License 2.0](https://github.com/marrlab/DinoBloom/blob/main/LICENSE)
- **Papers:**
- [DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology](https://arxiv.org/abs/2404.05022)
## Model Usage
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
# get example histology image
img = Image.open(
urlopen(
"https://raw.githubusercontent.com/zxaoyou/segmentation_WBC/master/Dataset%201/001.bmp"
)
)
# load model from the hub
model = timm.create_model(
model_name="hf-hub:1aurent/vit_small_patch14_224.dinobloom",
pretrained=True,
).eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
data = transforms(img).unsqueeze(0) # input is a (batch_size, num_channels, img_size, img_size) shaped tensor
output = model(data) # output is a (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@misc{koch2024dinobloom,
title = {DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology},
author = {Valentin Koch and Sophia J. Wagner and Salome Kazeminia and Ece Sancar and Matthias Hehr and Julia Schnabel and Tingying Peng and Carsten Marr},
year = {2024},
eprint = {2404.05022},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
``` |
malteh14/Workshop_ViT | malteh14 | 2024-05-20T18:35:09Z | 192 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T18:32:37Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Workshop_ViT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Workshop_ViT
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0466
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
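A minimal inference sketch with the `image-classification` pipeline (the label set depends on the unnamed fine-tuning dataset; `example.jpg` is a placeholder):
```python
from transformers import pipeline

# Classify a local image with the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="malteh14/Workshop_ViT")
print(classifier("example.jpg"))  # "example.jpg": placeholder path
```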
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0884 | 1.5385 | 100 | 0.0393 | 0.9925 |
| 0.0357 | 3.0769 | 200 | 0.0466 | 0.9925 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BugMaker-Boyan/text2sql_schema_item_classifier_bird | BugMaker-Boyan | 2024-05-20T18:33:52Z | 4 | 0 | transformers | [
"transformers",
"roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T18:10:24Z | ---
license: apache-2.0
---
|
fogs3d/gemma-1.1-7b-it-Q4_K_M-GGUF | fogs3d | 2024-05-20T18:31:37Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T18:31:20Z | ---
license: gemma
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: How does the brain work?
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# fogs3d/gemma-1.1-7b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-1.1-7b-it`](https://huggingface.co/google/gemma-1.1-7b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-1.1-7b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo fogs3d/gemma-1.1-7b-it-Q4_K_M-GGUF --model gemma-1.1-7b-it.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo fogs3d/gemma-1.1-7b-it-Q4_K_M-GGUF --model gemma-1.1-7b-it.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-1.1-7b-it.Q4_K_M.gguf -n 128
```
|
mii-llm/minerva-chat-v0.1-alpha-sft | mii-llm | 2024-05-20T18:30:09Z | 5,641 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"minerva",
"sft",
"conversational",
"it",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T17:52:59Z | ---
license: cc-by-nc-4.0
language:
- it
tags:
- minerva
- sft
---
Minerva sft
maneln/tinyllama2 | maneln | 2024-05-20T18:26:30Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T15:59:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
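A minimal generation sketch (hypothetical: the card does not specify a chat template or generation settings, so these are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the checkpoint and run a short chat-style generation
tokenizer = AutoTokenizer.from_pretrained("maneln/tinyllama2")
model = AutoModelForCausalLM.from_pretrained("maneln/tinyllama2")
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello, who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```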
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Andres2024a/ppo-LunarLander-v2 | Andres2024a | 2024-05-20T18:26:09Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-20T18:25:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.24 +/- 13.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's Files tab for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub and load it
checkpoint = load_from_hub("Andres2024a/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Mullerjo/poca-SoccerTwos | Mullerjo | 2024-05-20T18:25:51Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-05-20T18:23:22Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Mullerjo/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
EthanRhys/Sage | EthanRhys | 2024-05-20T18:23:30Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-20T18:22:41Z | ---
license: openrail++
---
|
thesven/Llama-3-70B-Instruct-GGUF | thesven | 2024-05-20T18:18:26Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:llama3",
"region:us"
] | text-generation | 2024-05-19T10:16:53Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
## Quantization Details
This repo contains GGUF quantized versions of the Meta Llama 3 70B Instruct model.
The model is supplied in different quantizations so that you can see what works best on the hardware you would like to run it on.
The repo contains quantizations in the following types:
- Q4_0
- Q4_1
- Q4_K
- Q4_K_S
- Q4_K_M
- Q2_K
- Q3_K
- Q3_K_S
- Q3_K_XS
- IQ2_K
- IQ3_S
- IQ3_XXS
- IQ4_NL
- IQ4_XS
- IQ2_S
- IQ2_XS
- IQ1_S
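A minimal loading sketch with `llama-cpp-python` (the glob below is an assumption — match it to the actual `.gguf` filename of the quantization you want):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quantization from this repo and run a short chat completion
llm = Llama.from_pretrained(
    repo_id="thesven/Llama-3-70B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; adjust to the file you want
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```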
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
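The same generation can also be run without the pipeline wrapper by loading the model directly with `AutoModelForCausalLM`; a sketch mirroring the settings above (illustrative, not part of the original snippet):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template and move it to the model device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop on either the regular EOS token or the end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```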
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
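The same filtered download can also be scripted with the `huggingface_hub` Python API; a small sketch (the local directory name is just an example):

```python
from huggingface_hub import snapshot_download

# Mirrors the CLI command above: fetch only the original consolidated checkpoints.
snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-70B-Instruct",
    allow_patterns=["original/*"],
    local_dir="Meta-Llama-3-70B-Instruct",  # example destination folder
)
```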
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted (tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
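As a rough sanity check on these figures (not part of the original card), the implied grid carbon intensity can be back-computed from the table; the per-MWh factor below is a derived value, not a number reported by Meta:

```python
# Back-of-the-envelope check: emissions ≈ GPU-hours × per-GPU power × carbon intensity.
gpu_hours = 7.7e6                 # total GPU hours (8B + 70B)
power_w = 700                     # peak power per H100-80GB, in watts
energy_mwh = gpu_hours * power_w / 1e6        # ≈ 5,390 MWh of GPU energy
implied_t_per_mwh = 2290 / energy_mwh         # ≈ 0.42 tCO2eq/MWh (≈ 0.42 kgCO2eq/kWh)
print(f"{energy_mwh:.0f} MWh, {implied_t_per_mwh:.2f} tCO2eq/MWh")
```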
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
BilalMuftuoglu/beit-base-patch16-224-75-fold2 | BilalMuftuoglu | 2024-05-20T18:16:38Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T17:56:18Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-75-fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9534883720930233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-75-fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Accuracy: 0.9535
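For reference, a minimal inference sketch (not part of the original card), assuming the checkpoint loads with the standard transformers image-classification pipeline:

```python
from transformers import pipeline

# Hypothetical usage example; the repo id is taken from this card's title.
classifier = pipeline(
    "image-classification",
    model="BilalMuftuoglu/beit-base-patch16-224-75-fold2",
)
print(classifier("example.jpg"))  # path, URL, or PIL.Image; returns label/score pairs
```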
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
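These values map onto standard transformers `TrainingArguments`; a hedged sketch of how such a configuration could be expressed (the original training script is not included in this card, so the names below are assumptions):

```python
from transformers import TrainingArguments

# Illustrative only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="beit-base-patch16-224-75-fold2",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = total train batch size 128
    num_train_epochs=100,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```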
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.7091 | 0.5349 |
| No log | 2.0 | 4 | 0.6502 | 0.7209 |
| No log | 3.0 | 6 | 0.9193 | 0.6977 |
| No log | 4.0 | 8 | 0.7499 | 0.7442 |
| 0.6436 | 5.0 | 10 | 0.4527 | 0.8140 |
| 0.6436 | 6.0 | 12 | 0.4169 | 0.8372 |
| 0.6436 | 7.0 | 14 | 0.5773 | 0.7442 |
| 0.6436 | 8.0 | 16 | 0.4076 | 0.8605 |
| 0.6436 | 9.0 | 18 | 0.3939 | 0.8605 |
| 0.3863 | 10.0 | 20 | 0.4017 | 0.8605 |
| 0.3863 | 11.0 | 22 | 0.4918 | 0.8140 |
| 0.3863 | 12.0 | 24 | 0.2688 | 0.8372 |
| 0.3863 | 13.0 | 26 | 0.3884 | 0.8140 |
| 0.3863 | 14.0 | 28 | 0.3679 | 0.8140 |
| 0.2925 | 15.0 | 30 | 0.2802 | 0.8837 |
| 0.2925 | 16.0 | 32 | 0.2436 | 0.9070 |
| 0.2925 | 17.0 | 34 | 0.2337 | 0.9302 |
| 0.2925 | 18.0 | 36 | 0.3711 | 0.8140 |
| 0.2925 | 19.0 | 38 | 0.2372 | 0.9302 |
| 0.2289 | 20.0 | 40 | 0.2685 | 0.9535 |
| 0.2289 | 21.0 | 42 | 0.2610 | 0.9070 |
| 0.2289 | 22.0 | 44 | 0.3328 | 0.8372 |
| 0.2289 | 23.0 | 46 | 0.3479 | 0.8372 |
| 0.2289 | 24.0 | 48 | 0.2855 | 0.8837 |
| 0.219 | 25.0 | 50 | 0.2962 | 0.9070 |
| 0.219 | 26.0 | 52 | 0.4038 | 0.9070 |
| 0.219 | 27.0 | 54 | 0.3149 | 0.9070 |
| 0.219 | 28.0 | 56 | 0.3212 | 0.9070 |
| 0.219 | 29.0 | 58 | 0.4895 | 0.8605 |
| 0.1933 | 30.0 | 60 | 0.4335 | 0.8837 |
| 0.1933 | 31.0 | 62 | 0.3521 | 0.8372 |
| 0.1933 | 32.0 | 64 | 0.2960 | 0.8837 |
| 0.1933 | 33.0 | 66 | 0.4037 | 0.8372 |
| 0.1933 | 34.0 | 68 | 0.2913 | 0.8837 |
| 0.1892 | 35.0 | 70 | 0.3043 | 0.8837 |
| 0.1892 | 36.0 | 72 | 0.3602 | 0.9302 |
| 0.1892 | 37.0 | 74 | 0.3315 | 0.9302 |
| 0.1892 | 38.0 | 76 | 0.2674 | 0.9302 |
| 0.1892 | 39.0 | 78 | 0.2970 | 0.9535 |
| 0.15 | 40.0 | 80 | 0.2661 | 0.9535 |
| 0.15 | 41.0 | 82 | 0.2551 | 0.8837 |
| 0.15 | 42.0 | 84 | 0.2467 | 0.9302 |
| 0.15 | 43.0 | 86 | 0.3008 | 0.9535 |
| 0.15 | 44.0 | 88 | 0.3265 | 0.9302 |
| 0.1238 | 45.0 | 90 | 0.2668 | 0.9302 |
| 0.1238 | 46.0 | 92 | 0.2574 | 0.9302 |
| 0.1238 | 47.0 | 94 | 0.2498 | 0.9535 |
| 0.1238 | 48.0 | 96 | 0.3319 | 0.8837 |
| 0.1238 | 49.0 | 98 | 0.2358 | 0.9302 |
| 0.1063 | 50.0 | 100 | 0.2015 | 0.9302 |
| 0.1063 | 51.0 | 102 | 0.2171 | 0.9302 |
| 0.1063 | 52.0 | 104 | 0.3119 | 0.9302 |
| 0.1063 | 53.0 | 106 | 0.2674 | 0.9070 |
| 0.1063 | 54.0 | 108 | 0.3076 | 0.8837 |
| 0.1112 | 55.0 | 110 | 0.3182 | 0.8837 |
| 0.1112 | 56.0 | 112 | 0.3371 | 0.9070 |
| 0.1112 | 57.0 | 114 | 0.3540 | 0.9070 |
| 0.1112 | 58.0 | 116 | 0.4058 | 0.9070 |
| 0.1112 | 59.0 | 118 | 0.4013 | 0.9070 |
| 0.1128 | 60.0 | 120 | 0.3309 | 0.9302 |
| 0.1128 | 61.0 | 122 | 0.3272 | 0.9302 |
| 0.1128 | 62.0 | 124 | 0.4012 | 0.9070 |
| 0.1128 | 63.0 | 126 | 0.5794 | 0.8605 |
| 0.1128 | 64.0 | 128 | 0.3881 | 0.9070 |
| 0.1168 | 65.0 | 130 | 0.2990 | 0.9070 |
| 0.1168 | 66.0 | 132 | 0.3018 | 0.8837 |
| 0.1168 | 67.0 | 134 | 0.2561 | 0.9302 |
| 0.1168 | 68.0 | 136 | 0.2921 | 0.9302 |
| 0.1168 | 69.0 | 138 | 0.3258 | 0.9070 |
| 0.0846 | 70.0 | 140 | 0.2925 | 0.9302 |
| 0.0846 | 71.0 | 142 | 0.3073 | 0.9302 |
| 0.0846 | 72.0 | 144 | 0.3318 | 0.9302 |
| 0.0846 | 73.0 | 146 | 0.3427 | 0.9302 |
| 0.0846 | 74.0 | 148 | 0.3588 | 0.9070 |
| 0.0845 | 75.0 | 150 | 0.3939 | 0.9070 |
| 0.0845 | 76.0 | 152 | 0.3774 | 0.9070 |
| 0.0845 | 77.0 | 154 | 0.3746 | 0.9070 |
| 0.0845 | 78.0 | 156 | 0.4073 | 0.8837 |
| 0.0845 | 79.0 | 158 | 0.3886 | 0.9070 |
| 0.0885 | 80.0 | 160 | 0.3765 | 0.9070 |
| 0.0885 | 81.0 | 162 | 0.3977 | 0.9070 |
| 0.0885 | 82.0 | 164 | 0.3864 | 0.9070 |
| 0.0885 | 83.0 | 166 | 0.3809 | 0.9070 |
| 0.0885 | 84.0 | 168 | 0.4492 | 0.8605 |
| 0.0859 | 85.0 | 170 | 0.5479 | 0.8605 |
| 0.0859 | 86.0 | 172 | 0.5372 | 0.8605 |
| 0.0859 | 87.0 | 174 | 0.4512 | 0.8605 |
| 0.0859 | 88.0 | 176 | 0.3930 | 0.9070 |
| 0.0859 | 89.0 | 178 | 0.3842 | 0.9302 |
| 0.0764 | 90.0 | 180 | 0.3808 | 0.9302 |
| 0.0764 | 91.0 | 182 | 0.3787 | 0.9302 |
| 0.0764 | 92.0 | 184 | 0.3833 | 0.9070 |
| 0.0764 | 93.0 | 186 | 0.3912 | 0.9070 |
| 0.0764 | 94.0 | 188 | 0.3888 | 0.8837 |
| 0.0727 | 95.0 | 190 | 0.3817 | 0.8837 |
| 0.0727 | 96.0 | 192 | 0.3708 | 0.9070 |
| 0.0727 | 97.0 | 194 | 0.3640 | 0.9070 |
| 0.0727 | 98.0 | 196 | 0.3613 | 0.9302 |
| 0.0727 | 99.0 | 198 | 0.3607 | 0.9302 |
| 0.069 | 100.0 | 200 | 0.3605 | 0.9302 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2 | Zoyd | 2024-05-20T18:09:39Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T17:54:43Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
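Note that the EXL2 files in this repository require an ExLlamaV2-compatible backend (for example the exllamav2 library or front-ends built on it) and do not load with plain transformers. For the original, unquantized weights, a minimal transformers sketch (illustrative only; the prompt and generation length are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Sketch for the unquantized base model; this EXL2 repo itself needs an
# ExLlamaV2-based loader instead of transformers.
model_id = "01-ai/Yi-1.5-34B-32K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Long-context language models are useful because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```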
|
tonyassi/sales-prediction | tonyassi | 2024-05-20T18:08:10Z | 0 | 4 | null | [
"safetensors",
"Image Regression",
"dataset:tonyassi/clothing-sales-ds",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T18:01:45Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- Image Regression
datasets:
- "tonyassi/clothing-sales-ds"
metrics:
- accuracy
model-index:
- name: "sales-prediction"
results: []
---
# sales-prediction
## Image Regression Model
This model was trained with [Image Regression Model Trainer](https://github.com/TonyAssi/ImageRegression/tree/main). It takes an image as input and outputs a float value.
```python
from ImageRegression import predict
predict(repo_id='tonyassi/sales-prediction', image_path='image.jpg')
```
---
## Dataset
Dataset: tonyassi/clothing-sales-ds\
Value Column: 'sales'\
Train Test Split: 0.2
---
## Training
Base Model: [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)\
Epochs: 10\
Learning Rate: 0.0001
---
## Usage
### Download
```bash
git clone https://github.com/TonyAssi/ImageRegression.git
cd ImageRegression
```
### Installation
```bash
pip install -r requirements.txt
```
### Import
```python
from ImageRegression import train_model, upload_model, predict
```
### Inference (Prediction)
- **repo_id** 🤗 repo id of the model
- **image_path** path to image
```python
predict(repo_id='tonyassi/sales-prediction',
image_path='image.jpg')
```
The first time this function is called, it will download the safetensors model. Subsequent calls will run faster.
### Train Model
- **dataset_id** 🤗 dataset id
- **value_column_name** column name of prediction values in dataset
- **test_split** test split of the train/test split
- **output_dir** the directory where the checkpoints will be saved
- **num_train_epochs** training epochs
- **learning_rate** learning rate
```python
train_model(dataset_id='tonyassi/clothing-sales-ds',
value_column_name='sales',
test_split=0.2,
output_dir='./results',
num_train_epochs=10,
learning_rate=0.0001)
```
The trainer will save the checkpoints in the output_dir location. The model.safetensors file contains the trained weights you'll use for inference (prediction).
### Upload Model
This function will upload your model to the 🤗 Hub.
- **model_id** the name of the model id
- **token** go [here](https://huggingface.co/settings/tokens) to create a new 🤗 token
- **checkpoint_dir** checkpoint folder that will be uploaded
```python
upload_model(model_id='sales-prediction',
token='YOUR_HF_TOKEN',
checkpoint_dir='./results/checkpoint-940')
``` |
farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_2e_5_freezed | farzanrahmani | 2024-05-20T18:02:08Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T18:01:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
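The tags indicate a RoBERTa-style text classifier (AriaBERT fine-tuned on a Persian DigiMag-style dataset). A minimal loading sketch, assuming the standard transformers text-classification pipeline applies; the sample sentence is arbitrary and the label names come from whatever was saved in the checkpoint config:

```python
from transformers import pipeline

# Hypothetical usage sketch; the repo id is taken from this card's title.
classifier = pipeline(
    "text-classification",
    model="farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_2e_5_freezed",
)
print(classifier("این یک متن نمونه است."))  # arbitrary Persian sample text
```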
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2 | Zoyd | 2024-05-20T17:58:19Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T17:21:47Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2 | Zoyd | 2024-05-20T17:58:14Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T16:48:57Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2 | Zoyd | 2024-05-20T17:58:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T16:16:02Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
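Since this repository hosts an EXL2 quantization, a minimal ExLlamaV2 loading sketch might look like the following. The local path, prompt, and sampler settings are illustrative assumptions rather than values from this card; see the ExLlamaV2 documentation for the authoritative API.
```python
# Minimal ExLlamaV2 inference sketch (assumes the quantized weights are downloaded locally)
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/01-ai_Yi-1.5-34B-32K-exl2"  # hypothetical local directory
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)               # split layers across the available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Write a short poem about the sea.", settings, num_tokens=128))
```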
|
ryandono/fine-tune-paligema | ryandono | 2024-05-20T17:58:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T17:55:45Z | ---
license: apache-2.0
---
|
Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2 | Zoyd | 2024-05-20T17:57:49Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T14:38:14Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2 | Zoyd | 2024-05-20T17:57:46Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T14:05:39Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
0xlexor/genesys | 0xlexor | 2024-05-20T17:57:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | 2024-05-20T17:53:16Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
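As a stopgap until the authors provide their own snippet, a minimal PEFT loading sketch might look like the following. It assumes the adapter targets the `meta-llama/Meta-Llama-3-8B-Instruct` base model listed in the card metadata; the prompt, dtype, and device placement are assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # base model from the card metadata
adapter_id = "0xlexor/genesys"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter weights

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```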
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2 | Zoyd | 2024-05-20T17:57:33Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T13:33:15Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2 | Zoyd | 2024-05-20T17:57:11Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T12:29:03Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-2_5bpw_exl2)**</center> | <center>11199 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_0bpw_exl2)**</center> | <center>13186 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_5bpw_exl2)**</center> | <center>15178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-3_75bpw_exl2)**</center> | <center>16182 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_0bpw_exl2)**</center> | <center>17170 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-4_25bpw_exl2)**</center> | <center>18176 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-5_0bpw_exl2)**</center> | <center>21147 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_0bpw_exl2)**</center> | <center>25182 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-6_5bpw_exl2)**</center> | <center>27230 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-32K-8_0bpw_exl2)**</center> | <center>29577 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
BilalMuftuoglu/beit-base-patch16-224-75-fold1 | BilalMuftuoglu | 2024-05-20T17:56:11Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T17:35:46Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-75-fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-75-fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2641
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.0987 | 0.3023 |
| No log | 2.0 | 4 | 0.6630 | 0.6977 |
| No log | 3.0 | 6 | 0.8342 | 0.6977 |
| No log | 4.0 | 8 | 0.6752 | 0.6977 |
| 0.7768 | 5.0 | 10 | 0.5408 | 0.7209 |
| 0.7768 | 6.0 | 12 | 0.7252 | 0.6977 |
| 0.7768 | 7.0 | 14 | 0.5609 | 0.7209 |
| 0.7768 | 8.0 | 16 | 0.7345 | 0.6977 |
| 0.7768 | 9.0 | 18 | 0.4614 | 0.7674 |
| 0.4249 | 10.0 | 20 | 0.4434 | 0.8372 |
| 0.4249 | 11.0 | 22 | 0.7552 | 0.7442 |
| 0.4249 | 12.0 | 24 | 0.4142 | 0.7674 |
| 0.4249 | 13.0 | 26 | 0.7183 | 0.7442 |
| 0.4249 | 14.0 | 28 | 0.5591 | 0.7907 |
| 0.3506 | 15.0 | 30 | 0.4363 | 0.6977 |
| 0.3506 | 16.0 | 32 | 0.5738 | 0.7907 |
| 0.3506 | 17.0 | 34 | 0.4286 | 0.8140 |
| 0.3506 | 18.0 | 36 | 0.4200 | 0.8140 |
| 0.3506 | 19.0 | 38 | 0.6514 | 0.7442 |
| 0.3434 | 20.0 | 40 | 0.4190 | 0.7907 |
| 0.3434 | 21.0 | 42 | 0.6220 | 0.8140 |
| 0.3434 | 22.0 | 44 | 0.6334 | 0.7907 |
| 0.3434 | 23.0 | 46 | 0.4487 | 0.8372 |
| 0.3434 | 24.0 | 48 | 0.4960 | 0.8605 |
| 0.2498 | 25.0 | 50 | 0.4179 | 0.8605 |
| 0.2498 | 26.0 | 52 | 0.3221 | 0.8605 |
| 0.2498 | 27.0 | 54 | 0.4776 | 0.8372 |
| 0.2498 | 28.0 | 56 | 0.5756 | 0.8605 |
| 0.2498 | 29.0 | 58 | 0.5444 | 0.8372 |
| 0.2461 | 30.0 | 60 | 0.3973 | 0.8605 |
| 0.2461 | 31.0 | 62 | 0.3672 | 0.8605 |
| 0.2461 | 32.0 | 64 | 0.4071 | 0.8837 |
| 0.2461 | 33.0 | 66 | 0.4678 | 0.7674 |
| 0.2461 | 34.0 | 68 | 0.2641 | 0.9302 |
| 0.2279 | 35.0 | 70 | 0.5551 | 0.8372 |
| 0.2279 | 36.0 | 72 | 0.2727 | 0.9302 |
| 0.2279 | 37.0 | 74 | 0.3312 | 0.8837 |
| 0.2279 | 38.0 | 76 | 0.7485 | 0.7907 |
| 0.2279 | 39.0 | 78 | 0.6407 | 0.8605 |
| 0.183 | 40.0 | 80 | 0.5420 | 0.8372 |
| 0.183 | 41.0 | 82 | 0.7364 | 0.8605 |
| 0.183 | 42.0 | 84 | 0.4141 | 0.8605 |
| 0.183 | 43.0 | 86 | 0.5461 | 0.7907 |
| 0.183 | 44.0 | 88 | 0.3438 | 0.8605 |
| 0.1658 | 45.0 | 90 | 0.3322 | 0.9302 |
| 0.1658 | 46.0 | 92 | 0.3463 | 0.9302 |
| 0.1658 | 47.0 | 94 | 0.6066 | 0.8605 |
| 0.1658 | 48.0 | 96 | 0.6259 | 0.8605 |
| 0.1658 | 49.0 | 98 | 0.4909 | 0.8372 |
| 0.1555 | 50.0 | 100 | 0.6022 | 0.7907 |
| 0.1555 | 51.0 | 102 | 0.5234 | 0.8372 |
| 0.1555 | 52.0 | 104 | 0.4164 | 0.8837 |
| 0.1555 | 53.0 | 106 | 0.3893 | 0.8605 |
| 0.1555 | 54.0 | 108 | 0.3774 | 0.8837 |
| 0.1487 | 55.0 | 110 | 0.7532 | 0.8372 |
| 0.1487 | 56.0 | 112 | 0.7141 | 0.8605 |
| 0.1487 | 57.0 | 114 | 0.4197 | 0.9070 |
| 0.1487 | 58.0 | 116 | 0.6816 | 0.7442 |
| 0.1487 | 59.0 | 118 | 0.5384 | 0.8140 |
| 0.1349 | 60.0 | 120 | 0.4971 | 0.8605 |
| 0.1349 | 61.0 | 122 | 0.4601 | 0.8837 |
| 0.1349 | 62.0 | 124 | 0.4740 | 0.8372 |
| 0.1349 | 63.0 | 126 | 0.5386 | 0.8140 |
| 0.1349 | 64.0 | 128 | 0.3376 | 0.9070 |
| 0.128 | 65.0 | 130 | 0.3905 | 0.9070 |
| 0.128 | 66.0 | 132 | 0.3841 | 0.9302 |
| 0.128 | 67.0 | 134 | 0.3567 | 0.8605 |
| 0.128 | 68.0 | 136 | 0.3985 | 0.8372 |
| 0.128 | 69.0 | 138 | 0.4165 | 0.8372 |
| 0.0875 | 70.0 | 140 | 0.4346 | 0.8605 |
| 0.0875 | 71.0 | 142 | 0.4497 | 0.8372 |
| 0.0875 | 72.0 | 144 | 0.4353 | 0.8837 |
| 0.0875 | 73.0 | 146 | 0.4276 | 0.8837 |
| 0.0875 | 74.0 | 148 | 0.4010 | 0.8837 |
| 0.0932 | 75.0 | 150 | 0.3958 | 0.9070 |
| 0.0932 | 76.0 | 152 | 0.3604 | 0.9070 |
| 0.0932 | 77.0 | 154 | 0.3427 | 0.8837 |
| 0.0932 | 78.0 | 156 | 0.3417 | 0.8837 |
| 0.0932 | 79.0 | 158 | 0.3438 | 0.9070 |
| 0.0943 | 80.0 | 160 | 0.3756 | 0.9302 |
| 0.0943 | 81.0 | 162 | 0.4077 | 0.9302 |
| 0.0943 | 82.0 | 164 | 0.4129 | 0.9302 |
| 0.0943 | 83.0 | 166 | 0.4304 | 0.9302 |
| 0.0943 | 84.0 | 168 | 0.4156 | 0.9302 |
| 0.0753 | 85.0 | 170 | 0.4088 | 0.9070 |
| 0.0753 | 86.0 | 172 | 0.4090 | 0.8837 |
| 0.0753 | 87.0 | 174 | 0.4076 | 0.9070 |
| 0.0753 | 88.0 | 176 | 0.4273 | 0.9070 |
| 0.0753 | 89.0 | 178 | 0.4367 | 0.9070 |
| 0.0846 | 90.0 | 180 | 0.4490 | 0.9070 |
| 0.0846 | 91.0 | 182 | 0.4448 | 0.8837 |
| 0.0846 | 92.0 | 184 | 0.4406 | 0.8837 |
| 0.0846 | 93.0 | 186 | 0.4393 | 0.8837 |
| 0.0846 | 94.0 | 188 | 0.4370 | 0.8837 |
| 0.0865 | 95.0 | 190 | 0.4330 | 0.8837 |
| 0.0865 | 96.0 | 192 | 0.4293 | 0.8837 |
| 0.0865 | 97.0 | 194 | 0.4240 | 0.8837 |
| 0.0865 | 98.0 | 196 | 0.4177 | 0.8837 |
| 0.0865 | 99.0 | 198 | 0.4144 | 0.8837 |
| 0.1019 | 100.0 | 200 | 0.4135 | 0.8837 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF | bunnycore | 2024-05-20T17:52:44Z | 3 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T17:52:27Z | ---
license: llama2
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Blackbird-Llama-3-8B`](https://huggingface.co/bunnycore/Blackbird-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Blackbird-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF --model blackbird-llama-3-8b.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo bunnycore/Blackbird-Llama-3-8B-Q5_K_M-GGUF --model blackbird-llama-3-8b.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m blackbird-llama-3-8b.Q5_K_M.gguf -n 128
```
|
sravan-gorugantu/model2024-05-20 | sravan-gorugantu | 2024-05-20T17:50:58Z | 162 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-20T12:37:07Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: model2024-05-20
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.96875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model2024-05-20
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0759
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1694 | 1.0 | 321 | 0.1613 | 0.9408 |
| 0.1271 | 2.0 | 642 | 0.1178 | 0.9530 |
| 0.0922 | 3.0 | 963 | 0.1076 | 0.9568 |
| 0.0788 | 4.0 | 1284 | 0.0731 | 0.9691 |
| 0.0766 | 5.0 | 1605 | 0.0759 | 0.9688 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
matthieuzone/BEAUFORTbis | matthieuzone | 2024-05-20T17:49:48Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T17:36:44Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/BEAUFORTbis
<Gallery />
## Model description
These are matthieuzone/BEAUFORTbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/BEAUFORTbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
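Pending the authors' own example, a minimal diffusers sketch for these LoRA weights might look like the following; the dtype, prompt wording, and inference settings are assumptions, and the fp16-fix VAE mentioned above can optionally be swapped in.
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline and attach the LoRA weights from this repository
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("matthieuzone/BEAUFORTbis")

# Use the trigger phrase from the "Trigger words" section to activate the learned concept
image = pipeline("a photo of sks cheese on a wooden board", num_inference_steps=30).images[0]
image.save("sks_cheese.png")
```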
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MSParkDev/SingSeqBERT-Katchers | MSParkDev | 2024-05-20T17:44:51Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T03:46:40Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SingSeqBERT-Katchers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SingSeqBERT-Katchers
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5238
- Accuracy: 0.7898
- F1: 0.7893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4927 | 1.0 | 2522 | 0.5391 | 0.7693 | 0.7687 |
| 0.4503 | 2.0 | 5044 | 0.5258 | 0.7911 | 0.7904 |
| 0.4146 | 3.0 | 7566 | 0.5238 | 0.7898 | 0.7893 |
| 0.3882 | 4.0 | 10088 | 0.5512 | 0.7950 | 0.7944 |
| 0.3633 | 5.0 | 12610 | 0.6592 | 0.7892 | 0.7884 |
| 0.3638 | 6.0 | 15132 | 0.8374 | 0.7811 | 0.7796 |
| 0.3212 | 7.0 | 17654 | 0.8621 | 0.7841 | 0.7833 |
| 0.2878 | 8.0 | 20176 | 0.9864 | 0.7779 | 0.7767 |
| 0.2407 | 9.0 | 22698 | 1.0765 | 0.7832 | 0.7824 |
| 0.2051 | 10.0 | 25220 | 1.1017 | 0.7869 | 0.7864 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yashpratap/apa-scoring | yashpratap | 2024-05-20T17:40:52Z | 0 | 0 | null | [
"en",
"dataset:librispeech_asr",
"license:mit",
"region:us"
] | null | 2024-05-20T17:36:56Z | ---
license: mit
datasets:
- librispeech_asr
language:
- en
--- |
BilalMuftuoglu/beit-base-patch16-224-65-fold5 | BilalMuftuoglu | 2024-05-20T17:32:01Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T17:02:53Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-65-fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9014084507042254
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-65-fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4937
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 3 | 0.7742 | 0.4507 |
| No log | 1.8462 | 6 | 0.7185 | 0.4930 |
| No log | 2.7692 | 9 | 0.6625 | 0.5634 |
| 0.7338 | 4.0 | 13 | 0.6136 | 0.7183 |
| 0.7338 | 4.9231 | 16 | 0.5974 | 0.6479 |
| 0.7338 | 5.8462 | 19 | 0.5771 | 0.6338 |
| 0.6191 | 6.7692 | 22 | 0.5400 | 0.7042 |
| 0.6191 | 8.0 | 26 | 0.5127 | 0.7183 |
| 0.6191 | 8.9231 | 29 | 0.5341 | 0.7324 |
| 0.5723 | 9.8462 | 32 | 0.4877 | 0.7887 |
| 0.5723 | 10.7692 | 35 | 0.6659 | 0.6197 |
| 0.5723 | 12.0 | 39 | 0.5790 | 0.6761 |
| 0.5161 | 12.9231 | 42 | 0.5001 | 0.7606 |
| 0.5161 | 13.8462 | 45 | 0.4195 | 0.8310 |
| 0.5161 | 14.7692 | 48 | 0.4806 | 0.7746 |
| 0.4982 | 16.0 | 52 | 0.4013 | 0.8028 |
| 0.4982 | 16.9231 | 55 | 0.4189 | 0.8028 |
| 0.4982 | 17.8462 | 58 | 0.4018 | 0.8310 |
| 0.438 | 18.7692 | 61 | 0.5230 | 0.7183 |
| 0.438 | 20.0 | 65 | 0.4768 | 0.7465 |
| 0.438 | 20.9231 | 68 | 0.4428 | 0.7887 |
| 0.4641 | 21.8462 | 71 | 0.4122 | 0.8169 |
| 0.4641 | 22.7692 | 74 | 0.4537 | 0.7746 |
| 0.4641 | 24.0 | 78 | 0.3838 | 0.8310 |
| 0.308 | 24.9231 | 81 | 0.4586 | 0.8028 |
| 0.308 | 25.8462 | 84 | 0.5623 | 0.8028 |
| 0.308 | 26.7692 | 87 | 0.4050 | 0.8310 |
| 0.2766 | 28.0 | 91 | 0.3860 | 0.8169 |
| 0.2766 | 28.9231 | 94 | 0.4062 | 0.8169 |
| 0.2766 | 29.8462 | 97 | 0.6191 | 0.8169 |
| 0.288 | 30.7692 | 100 | 0.6076 | 0.7746 |
| 0.288 | 32.0 | 104 | 0.5300 | 0.8169 |
| 0.288 | 32.9231 | 107 | 0.6178 | 0.7606 |
| 0.2676 | 33.8462 | 110 | 0.4465 | 0.8451 |
| 0.2676 | 34.7692 | 113 | 0.5893 | 0.7606 |
| 0.2676 | 36.0 | 117 | 0.4782 | 0.8169 |
| 0.2306 | 36.9231 | 120 | 0.4946 | 0.8310 |
| 0.2306 | 37.8462 | 123 | 0.4534 | 0.8451 |
| 0.2306 | 38.7692 | 126 | 0.4603 | 0.8451 |
| 0.2095 | 40.0 | 130 | 0.5839 | 0.8028 |
| 0.2095 | 40.9231 | 133 | 0.4536 | 0.8310 |
| 0.2095 | 41.8462 | 136 | 0.4617 | 0.8592 |
| 0.2095 | 42.7692 | 139 | 0.4531 | 0.8592 |
| 0.2171 | 44.0 | 143 | 0.4325 | 0.8732 |
| 0.2171 | 44.9231 | 146 | 0.4732 | 0.8592 |
| 0.2171 | 45.8462 | 149 | 0.4779 | 0.8592 |
| 0.1686 | 46.7692 | 152 | 0.4841 | 0.8451 |
| 0.1686 | 48.0 | 156 | 0.5690 | 0.8310 |
| 0.1686 | 48.9231 | 159 | 0.5477 | 0.8451 |
| 0.1644 | 49.8462 | 162 | 0.5844 | 0.8310 |
| 0.1644 | 50.7692 | 165 | 0.5818 | 0.8310 |
| 0.1644 | 52.0 | 169 | 0.4674 | 0.8451 |
| 0.1915 | 52.9231 | 172 | 0.5320 | 0.8732 |
| 0.1915 | 53.8462 | 175 | 0.4933 | 0.8451 |
| 0.1915 | 54.7692 | 178 | 0.5090 | 0.8592 |
| 0.1561 | 56.0 | 182 | 0.4864 | 0.8451 |
| 0.1561 | 56.9231 | 185 | 0.4652 | 0.8732 |
| 0.1561 | 57.8462 | 188 | 0.5113 | 0.8592 |
| 0.1298 | 58.7692 | 191 | 0.4803 | 0.8732 |
| 0.1298 | 60.0 | 195 | 0.4794 | 0.8451 |
| 0.1298 | 60.9231 | 198 | 0.4743 | 0.8451 |
| 0.1467 | 61.8462 | 201 | 0.4739 | 0.8592 |
| 0.1467 | 62.7692 | 204 | 0.5211 | 0.8451 |
| 0.1467 | 64.0 | 208 | 0.5315 | 0.8592 |
| 0.1363 | 64.9231 | 211 | 0.5182 | 0.8592 |
| 0.1363 | 65.8462 | 214 | 0.5160 | 0.8451 |
| 0.1363 | 66.7692 | 217 | 0.6170 | 0.8169 |
| 0.154 | 68.0 | 221 | 0.4857 | 0.8592 |
| 0.154 | 68.9231 | 224 | 0.4763 | 0.8592 |
| 0.154 | 69.8462 | 227 | 0.4937 | 0.9014 |
| 0.141 | 70.7692 | 230 | 0.5038 | 0.8873 |
| 0.141 | 72.0 | 234 | 0.5026 | 0.8592 |
| 0.141 | 72.9231 | 237 | 0.5019 | 0.8592 |
| 0.1166 | 73.8462 | 240 | 0.5028 | 0.8592 |
| 0.1166 | 74.7692 | 243 | 0.5226 | 0.8592 |
| 0.1166 | 76.0 | 247 | 0.5295 | 0.8732 |
| 0.117 | 76.9231 | 250 | 0.5073 | 0.8732 |
| 0.117 | 77.8462 | 253 | 0.5081 | 0.8732 |
| 0.117 | 78.7692 | 256 | 0.5036 | 0.8592 |
| 0.1037 | 80.0 | 260 | 0.5038 | 0.8451 |
| 0.1037 | 80.9231 | 263 | 0.5072 | 0.8451 |
| 0.1037 | 81.8462 | 266 | 0.5081 | 0.8451 |
| 0.1037 | 82.7692 | 269 | 0.5062 | 0.8310 |
| 0.1085 | 84.0 | 273 | 0.5144 | 0.8451 |
| 0.1085 | 84.9231 | 276 | 0.5208 | 0.8592 |
| 0.1085 | 85.8462 | 279 | 0.5248 | 0.8592 |
| 0.0939 | 86.7692 | 282 | 0.5301 | 0.8592 |
| 0.0939 | 88.0 | 286 | 0.5357 | 0.8451 |
| 0.0939 | 88.9231 | 289 | 0.5398 | 0.8451 |
| 0.0962 | 89.8462 | 292 | 0.5434 | 0.8451 |
| 0.0962 | 90.7692 | 295 | 0.5455 | 0.8451 |
| 0.0962 | 92.0 | 299 | 0.5448 | 0.8451 |
| 0.1131 | 92.3077 | 300 | 0.5446 | 0.8451 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
uiyong/kospi_report_model_0517 | uiyong | 2024-05-20T17:31:22Z | 79 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-20T17:10:51Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sgarrett/test_4 | sgarrett | 2024-05-20T17:30:21Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:nferruz/ProtGPT2",
"base_model:finetune:nferruz/ProtGPT2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T17:22:16Z | ---
license: apache-2.0
base_model: nferruz/ProtGPT2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_output_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_2
This model is a fine-tuned version of [nferruz/ProtGPT2](https://huggingface.co/nferruz/ProtGPT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.1877
- Accuracy: 0.4684
## Model description
More information needed
## Intended uses & limitations
More information needed
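Pending fuller documentation, a minimal generation sketch is shown below; it assumes this checkpoint behaves like its ProtGPT2 base, and the sampling settings are illustrative rather than values tuned for this model.

```python
from transformers import pipeline

# Illustrative sketch only: assumes the checkpoint generates protein sequences
# in the same way as the upstream ProtGPT2 base model.
generator = pipeline("text-generation", model="sgarrett/test_4")

sequences = generator(
    "<|endoftext|>",        # ProtGPT2-style models typically start generation from this token
    max_length=100,
    do_sample=True,
    top_k=950,              # illustrative sampling settings, not tuned for this checkpoint
    repetition_penalty=1.2,
    num_return_sequences=2,
    eos_token_id=0,
)
for seq in sequences:
    print(seq["generated_text"])
```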
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200.0
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
HackerMonica/nllb-200-distilled-600M-en-zh_CN | HackerMonica | 2024-05-20T17:26:30Z | 133 | 2 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"en",
"zh",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-05-20T15:41:44Z | ---
license: cc-by-nc-4.0
language:
- en
- zh
metrics:
- bleu
pipeline_tag: translation
---
# Model Documentation: English to Simplified Chinese Translation with NLLB-200-distilled-600M
## Model Overview
This document describes a machine translation model fine-tuned from Meta's NLLB-200-distilled-600M for translating from English to Simplified Chinese. The model, hosted at `HackerMonica/nllb-200-distilled-600M-en-zh_CN`, utilizes a distilled version of the NLLB-200 model which has been specifically optimized for translation tasks between the English and Simplified Chinese languages.
## Dependencies
The model requires the `transformers` library by Hugging Face. Ensure that you have the library installed:
```bash
pip install transformers
```
## Setup
Import necessary classes from the `transformers` library:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
```
Initialize the model and tokenizer:
```python
model = AutoModelForSeq2SeqLM.from_pretrained('HackerMonica/nllb-200-distilled-600M-en-zh_CN').to("cuda")  # keep the model on the same device as the "cuda" inputs used below
tokenizer = AutoTokenizer.from_pretrained('HackerMonica/nllb-200-distilled-600M-en-zh_CN')
```
## Usage
To translate text from English to Simplified Chinese, use the Python code below:
```python
def translate(text):
inputs = tokenizer(text, return_tensors="pt").to("cuda")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["zho_Hans"], max_length=300
)
translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
return translation
```
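A quick sanity check of the helper above might look like this (the example sentence is arbitrary):

```python
# Example call; any English input can be passed to the helper defined above.
print(translate("Machine translation helps people read content in other languages."))
```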
|
MrBlackSheep/BOOBS-REVpruned | MrBlackSheep | 2024-05-20T17:26:20Z | 6 | 0 | diffusers | [
"diffusers",
"checkpoint",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-06T12:20:41Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- checkpoint
---
### Model Description
A pruned merge of BOOBS MIX checkpoint and RevAnimated v1.2.2-EOL https://huggingface.co/s6yx/ReV_Animated
- **Developed by:** MrBlackSheep
- **Model type:** Checkpoint **(Pruned version)**
- **License:** creativeml-openrail-m
 |
tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF | tezcan | 2024-05-20T17:24:35Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"tr",
"license:llama3",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T17:24:20Z | ---
language:
- tr
license: llama3
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Kocdigital-LLM-8b-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge TR
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 44.03
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag TR
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc
value: 46.73
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU TR
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.11
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA TR
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: acc
value: 48.21
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande TR
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 10
metrics:
- type: acc
value: 54.98
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k TR
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.78
name: accuracy
---
# tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`KOCDIGITAL/Kocdigital-LLM-8b-v0.1`](https://huggingface.co/KOCDIGITAL/Kocdigital-LLM-8b-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/KOCDIGITAL/Kocdigital-LLM-8b-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF --model kocdigital-llm-8b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo tezcan/Kocdigital-LLM-8b-v0.1-Q4_K_M-GGUF --model kocdigital-llm-8b-v0.1.Q4_K_M.gguf -c 2048
```
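Once the server is running, it can also be queried over HTTP; the sketch below assumes llama.cpp's default port (8080) and its OpenAI-compatible chat completions route, so adjust both if your build differs.

```python
import requests

# Assumes llama-server from the command above is listening on localhost:8080
# and exposes the OpenAI-compatible chat completions endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Merhaba! Kendini kısaca tanıtır mısın?"}],
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```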
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m kocdigital-llm-8b-v0.1.Q4_K_M.gguf -n 128
```
|
feysahin/Reinforce-CartPole-v1 | feysahin | 2024-05-20T17:24:22Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-20T17:24:11Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
maneln/tiny-llama | maneln | 2024-05-20T17:22:53Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T17:00:25Z | ---
license: apache-2.0
---
|
quangantang/Mistral-7B-Instruct-v0.2-GPTQ-Brief-Hospital-Course | quangantang | 2024-05-20T17:20:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T17:18:06Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: Mistral-7B-Instruct-v0.2-GPTQ-Brief-Hospital-Course
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-GPTQ-Brief-Hospital-Course
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
The model is part of the work submitted to the Discharge Me! Shared Task; it is instruction-finetuned to generate the 'Brief Hospital Course' section of the discharge summary.
## Model description
More information needed
## Intended uses & limitations
More information needed
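While intended-use details are still to be filled in, a minimal inference sketch is given below; it assumes the PEFT adapter in this repository loads cleanly on top of the GPTQ base model (which requires a GPTQ-capable install such as auto-gptq/optimum), and the prompt format is only a guess rather than the template used during fine-tuning.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the GPTQ base model referenced by this adapter and applies the adapter weights.
adapter_id = "quangantang/Mistral-7B-Instruct-v0.2-GPTQ-Brief-Hospital-Course"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")

# Hypothetical prompt: the exact instruction template used during fine-tuning is not documented here.
prompt = "[INST] Write the Brief Hospital Course section for the following discharge note:\n<note text> [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```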
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Mouwiya/image-model-demo2 | Mouwiya | 2024-05-20T17:16:24Z | 85 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-to-text | 2024-04-19T15:03:24Z | ---
library_name: transformers
pipeline_tag: image-to-text
---
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
farzanrahmani/AriaBERT_finetuned_digimag_Epoch_3_lr_2e_5_unfreezed | farzanrahmani | 2024-05-20T17:16:20Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T17:15:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shruthikaa/BigBird_Classification | Shruthikaa | 2024-05-20T17:14:50Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"big_bird",
"text-classification",
"generated_from_trainer",
"base_model:google/bigbird-roberta-base",
"base_model:finetune:google/bigbird-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T13:16:38Z | ---
license: apache-2.0
base_model: google/bigbird-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BigBird_Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BigBird_Classification
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4176
- Accuracy: 0.813
## Model description
More information needed
## Intended uses & limitations
More information needed
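While detailed usage notes are still to be added, a minimal inference sketch is shown below; the label names are not documented in this card, so the pipeline will return generic `LABEL_0`/`LABEL_1` identifiers.

```python
from transformers import pipeline

# Illustrative sketch: loads this fine-tuned BigBird checkpoint for text classification.
classifier = pipeline("text-classification", model="Shruthikaa/BigBird_Classification")
print(classifier("Example document text to classify."))
```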
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6106 | 1.0 | 625 | 0.4582 | 0.785 |
| 0.4833 | 2.0 | 1250 | 0.4176 | 0.813 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
kishorea/P8_Llama3_base | kishorea | 2024-05-20T17:12:11Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-02T18:36:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rendulic/setfit-ll-MiniLM-L6-v2-email-fraud-2024-05-18 | rendulic | 2024-05-20T17:12:06Z | 11 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | text-classification | 2024-05-18T22:50:04Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/all-MiniLM-L6-v2
metrics:
- accuracy
widget:
- text: "James\n\n\nFrom: The Desk of Ajibola James\nSenior Manager: Pacific International\
\ Bank PLC.\n 40 Marina Street\n Lagos\n\nDear friend,\n\
\nFirst, I must solicit your confidence in this transaction. This is by virtue\
\ \nof its nature being utterly confidential and top secret. We have decided\
\ to \ncontact you due to the urgency of this transaction, as we have been reliably\
\ \ninformed of your discretness, trustworthy and ability to carry out legitimate\
\ \nbusiness.\n\nTHE PROPOSITION: An American, Mr. Shaline Adam, an Oil merchant\
\ with the \nFederal Government of Nigeria, until his death with his whole family\
\ on an \nEgyptAir Flight #990, which crashed into the Atlantic Ocean on October\
\ 31, \n1999, banked with us at Pacific International Bank Plc Lagos and had\
\ a \nclosing balance of US$3.5,000,000.00 (Three Milion Five Hundred Thousand\
\ \nUnited States Dollars Only) as at the end of September, 2000. Attached here\
\ \nis a CNN webpage on the unfortunate crash in 1999: \nhttp://www.cnn.com/US/9911/02/egyptair990.list/index.html\n\
\nValuable efforts have been made by the bank to get in touch with any of the\
\ \nAdam's family or relative, but to no avail. His Next of Kin was his wife that\
\ \nwas involved in the crash so for now there is no trace of his family.\n\n\
It is because of the perceived impossibility of locating a relative of the \n\
Shaline Adam's family (since all his family are dead) that the bank is making\
\ \nplans to ceed the said amount to the Defence Ministry for the procurement\
\ of \nweapons of war. In order to avert this ugly situation, few of my colleagues\
\ \nand I have decided to contact you and seek your permission to have you stand\
\ \nas a relative to Late Shaline Adam so that the total amount of US$3.5 Million\
\ \nDollars will be processed and released in your favour into your personal \n\
account.\n\nAll documents and proofs to enable you get this fund will be carefully\
\ \nworked out. We have secured from the probate, an order of Madamus, to locate\
\ \nany of the deceased beneficiary. Be rest assured that this transaction is\
\ \nrisk-free. Also, your share for offering to assist us and provide the \nreceiving\
\ account will be 10% of the total sum, while 90% will be for my \ncolleagues\
\ and I, which also would be in your account for safe custody and \nprobable future\
\ investment purpose in your country.\n\nAs soon as we receive an acknowledgement\
\ of your willingness to accept our \nproposal, we shall furnish you with further\
\ details as we concluded all \narrangements to have the money transferred to\
\ you within 7 working days from \nthe date of commencement.\n\nIf this proposal\
\ is acceptable to you, then furnish us with your most \nconfidential telephone\
\ and fax numbers at which time an application for the \nfund release will\
\ be forwarded in your favour.\n\nThank you in advance for your anticipated cooperation.\n\
\nRegards,\n\nAjibola James\n\nAlternative mail:[email protected]"
- text: "My Compliment\n\n\nFrom: Dr. Rasheed S. Abubakar,\n\nDear Friend,\n\nMy Compliment\
\ to you,\n\nI guess this letter may come to you as a surprise since I had no\
\ \nprevious correspondence with you.\n\nI am sending you this mail on behalf\
\ of the chairman tender board of \nIndependent National Electoral Commission\
\ (INEC) MR. SETTLEY DAZE. We \ngot your contact in our search for a reliable\
\ person to handle a very \nconfidential transaction involving the transfer of\
\ Forty Nine Million, \nFive Hundred Thosand United States Dollars US$49.5Million.\n\
\nThe above fund is not connected with arms, drugs or money laundering. \nIt is\
\ the product of an over invoiced Contract awarded in 2001 by INEC \nto a foreign\
\ company for the construction of high rise estate in the \nfederal capital territory.\n\
\nThe contract has long been executed and payment of the actual contract \namount\
\ has been paid to the foreign contractor leaving the balance, \nwhich my colleague\
\ and I now want to transfer out of Nigeria into a \nreliable foreign account\
\ for our personal use.\n\nAs civil servants we are not allowed to run foreign\
\ accounts. Hence we \nhave chosen you to front and support us as the beneficiary\
\ to be paid. \nIf you are interested in the proposal kindly get back to me by\
\ sending \nme your letter of acceptance along with your direct telephone and\
\ fax \nnumbers, For your support and partnership, please reply me to negotiate\
\ \nyour fees or the percentage you wish to be paid when the funds arrive \nyour\
\ bank account. \n\nFurther details about this transaction will be discussed in\
\ the \nsubsequent correspondence. Note also that the particular nature of your\
\ \nbusiness is irrelevant to this transaction and all local contacts and \narrangements\
\ are in place for a smooth and successful conclusion of \nthis transaction.\n\
\nBe informed that we are aware of the way email proposals of this type \nare\
\ being sent from this part of africa and as regards that, you should \nplease\
\ treat this with utmost attention knowing fully well that you \ncannot and will\
\ not be compelled to assist us if you are not disposed \nto.\n\nContact me via\
\ my email account or you also reach me on this email \naccount [email protected]\
\ with your contact telephone and fax \nnumbers on response, I will call you for\
\ a discussion.\n\nThank you as I await your response.\n\nSincerely,\n\n\nDr.\
\ Rasheed S. Abubakar."
- text: 'How to resolve!
www.rewire.comInternational Financial Services - RewireInternational Financial
Services - RewireGood Day YvonneOpen the attach file sent ,after the departmental
payment receipt has be uploaded we also sent awareness letter note to Mr chalan
which should be sent to your bank directly by chalan,Please ensure chalan uploads
the departmental payment receipt receipt as soon as possible because the amount
to your account is more than $100,000 when converted from pound sterling to USD,please
write him (chalan)as soon as possible to settle thisKind RegardsReire Paying Deptwww.rewire.com'
- text: "Introduction/Business Proposal\n\n\nMy Dear Friend , \nGREETING!!!.With a\
\ humbled heart I commit myself this day to write \nand ask for your benevolence\
\ and kind consideration of \nmy families plight for assistance. I am making this\
\ contact on behalf of my\n family not minding the consequences but hoping that\
\ you would understand our\n predicament and come to our aid and assist us. I\
\ would also kindly apologize\n for any inconvenience or embarrassment this might\
\ cause your person, as we\n neither know each other personally nor have had any\
\ previous contact or\n correspondence. \nI am Julius Nsekou Mobutu Sese Sekou,son\
\ of the late president Mobutu Sese Sekou\n of the Congo Democratic Republic(former\
\ Republic of Zaire). \nThere was unrest (war) in my country which resulted in\
\ the overthrow and\n eventual of my father President \nMobutu Sese Sekou.My family\
\ members have since escaped \nto Morocco while i am presently in Nigeria(West\
\ Africa) on political asylum. \nDue to the political crisis,no member of my family\
\ can go back to the Congo\n Democratic Republic or transact any business investment\
\ there,also my fathers\n properties have been seized and Bank accounts frozen\
\ by the Government of\n Lawrent Joseph Kabila. \nBefore my father died ,he deposited\
\ the sum of $50.5 MILLION(USD) CASH in a\n PRIVATE SECURITY VAULT in Europe.Please\
\ we need your assistance in moving and\n securing this money in your bank accounts\
\ abroad,my family will compensate you\n adequately with 20% of the total amount\
\ for your assistance and co operation. \nMy family will want to invest this money\
\ abroad,and for this reason, i sincerely\n appeal to you to help us in setting\
\ up this business.May i also state that you\n will advice on areas of investment\
\ as regards your business and your country as\n the families foreign partner.\
\ \nI look forward to further co-operation from you and will be grateful for your\n\
\ immediate response through the underlisted mediums. \nReply back to E-mail:\
\ [email protected]\nYours Sincerely, \nJulius Nsekou Mobutu & Entire Family."
- text: "FAMILY BUSINESS ASSISTANCE\n\n\nHIGHLY CONFIDENTIAL\nFROM: Prince Tunde O\
\ Adisaraki \nMOBILE:234-90-509398\nMOBILE:234-80-33254029\nFAX:234-92726808\n\
\ \n \nGreetings, \n \nThis letter might surprise you because we have not met\
\ neither in person nor by correspondence. But I believe it is one day that you\
\ get to know somebody either in physical or through correspondence. I got your\
\ contact through some discreet inquiry from the chamber of commerce and industry,\
\ you and your organization were revealed as being quite astute in private entrepreneurship,\
\ one has no doubt in your ability to handle a financial business transaction.\n\
\ \nHowever,I am the first son of his Royal Majesty, Iginawari Nweke Adisaraki\
\ III and the traditional ruler of Eleme Province in the oil area of Rivers State\
\ of Nigeria. I am making this contact to you in respect of US 28,000,000.00 (Twenty\
\ eight million United States Dollars) which I inherited from my late father.\
\ This money was accumulated from royalties paid to my father as compensation\
\ by the oil firms located in our area as a result of oil presence on our land\
\ which hamper agriculture which is our major source of livelihood. Unfortunately\
\ my father died from protracted diabetes. But before his he called my attention\
\ and informed me that he lodged some funds on a two boxes with a security firm\
\ with an open beneficiary status. The lodgment Security Code Number was also\
\ revealed to me, he then advised me to look for a reliable business partner abroad,\
\ who will assist me in investing the money in a lucrative business as a result\
\ of economic instability in Nigeria.\n \nSo this is the main reason why I am\
\ contacting you for us to move this money from the security firm to any country\
\ of your choice for investment purposes. So I will like you to be the ultimate\
\ beneficiary, so that the funds can be moved in your name and particulars to\
\ any country of your choice where it will be claimed and invested. Hence my father\
\ have had intimated the security firm personnel that the beneficiary of the Box\
\ is his foreign partner whose particulars will be forwarded to the firm when\
\ due. \n \nBut I will guide you accordingly. As soon as the fund reaches, I will\
\ then come over to meet you in person, so that we can discuss physically on Investment\
\ entials.Based on this instance I and my family have unanimously decided to give\
\ you 20% of the total money and annual 5% of the after tax returns on investment\
\ for the first three years. Thereafter, the term shall be varied. 2% for charity\
\ homes and 3% for expenses, which may arise during the transaction, fax and phone\
\ bills inclusive. The balance of 70% you will invest and manage for my family.\
\ I hereby guarantee you that this is not government money, it is not drug money\
\ and it is not money from arms deal.\nThough you have to maintain high degree\
\ of confidentiality on this matter. \n \nI will give you all proof of deposit\
\ and existence of money once urged and fully satisfied with you capability and\
\ honesty. I hope this will be the beginning of a prosperous relationship between\
\ my family and your family. Nevertheless if you are for any reason not interest,\
\ kindly inform me immediately so that I will look for another contact.\n \nI\
\ required also your private phone and fax numbers for easy communication.I am\
\ waiting for your quick response through my private phone or fax Number.\n \n\
I am waiting for your quick response. \n \nYours faithfully, \n \nPrince Tunde\
\ Olusola Adisaraki (For the Family)"
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.96875
name: Accuracy
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 1 | <ul><li>'HELP ME AND MY FAMILY PLEASE.\n\n\nDEAR FRIEND,\n\nTHROUGH THE COURTESY OF BUSINESS OPPORTUNITY, I TAKE LIBERTY ANCHORED ON A\nSTRONG DESIRE TO SOLICIT YOUR ASSISTANCE ON THIS MUTUALLY BENEFICIAL AND\nRISKFREE TRANSACTION WHICH I HOPE YOU WILL GIVE YOUR URGENT ATTENTION.\n\nI AM MR.SESAY MASSAQUOE I AM MOVED TO WRITE YOU THIS LETTER ,THIS WAS IN\nCONFIDENCE CONSIDERING OUR PRESENT CIRCUMSTANCE AND SITUATION.\n\nI ESCAPED WITH MY WIFE AND CHILDREN OUT OF SIERRA- LEONE TO\nGROU-JIRNSSUM,A VILLAGE IN THE NETHERLANDS THROUGH THE AID OF THE UNITED\nNATIONS EVACUATION TEAM WHERE WE ARE NOW PRESENTLY RESIDING ON TEMPORARY\nPOLITICAL ASYLUM.\n\nHOWEVER DUE TO THIS SITUATION I DECIDED TO CHANGE MOST OF MY BILLIONS OF\nDOLLARS DEPOSITED IN SWISS BANK AND OTHER COUNTRIES INTO OTHER FORMS OF\nMONEY CODED FOR SAFE PURPOSE BECAUSE THE NEW HEAD OF STATES AHMED TEJAN\nKABBA MADE ARRANGEMENTS WITH THE SWISS GOVERNMENT AND OTHER EUROPEAN\nCOUNTRIES TO FREEZE ALL MY TREASURES DEPOSITED IN SOME EUROPEAN\nCOUNTRIES,HENCE I AND MY WIFE ALONG WITH MY CHILDREN, DECIDED LAYING LOW\nIN THIS OUR TEMPOERY POLITICAL ASYLUM CAMP HERE IN GROU JIRNSSUM IN THE\nNETHERLANDS TO STUDY THE SITUATION TILL WHEN THINGS GETS BETTER,SINCE\nPRESIDENT TEJAN KABBA TAKING OVER GOVERNMENT AGAIN IN SIERRA-LEONE ONE OF\nMY CHATEAUX IN SOUTHERN FRANCE WAS CONFISCATED BY THE FRENCH\nGOVERNMENT,AND AS SUCH WE HAD TO CHANGE OUR IDENTITY SO THAT OUR\nINVESTMENT WILL NOT BE TRACED AND CONFISCATED.\n\nI HAVE DEPOSITED THE SUM OF THIRTY MILLION,FIVE HUNDRED THOUSAND UNITED\nSTATES DOLLARS(US$30,500,000)WITH A SECURITY COMPANY FOR SAFEKEEPING.\nTHE FUNDS ARE SECURITY CODED TO PREVENT THEM FROM KNOWING THE ACTUAL\nCONTENTS.\n\nWHAT I WANT YOU TO DO NOW IS TO INDICATE YOUR INTEREST THAT YOU WILL\nASSIST ME AND MY IMMEDIATE FAMILY BY RECEIVING THE MONEY ON OUR BEHALF.\nTHE ACCOUNT REQUIRED FOR THIS PROJECT CAN EITHER BE PERSONAL,COMPANY OR AN\nOFFSHORE ACCOUNT THAT YOU HAVE TOTAL CONTROL OVER,YOUR AREA OF\nSPECIALISATION WILL NOT BE A HINDERANCE TO THE SUCCESSFUL EXECUTION OF\nTHIS TRANSACTION.\n\nACKOWLEDGE THIS MESSAGE,SO THAT I CAN INTRODUCE YOU TO MY FAMILY AS OUR\nFOREIGN TRUSTED PARTNER WHO SHALL TAKE CHARGE OF OUR INVESTMENT ABROAD\nWHERE WE NOW PLAN TO SETTLE.\n\nI WANT YOU TO ASSIST US IN INVESTING THIS MONEY,BUT I WILL NOT WANT OUR\nIDENTITY REVEALED.I WILL ALSO WANT TO BUY PROPERTIES AND STOCKS IN\nMULTI-NATIONAL COMPANIES AND TO ENGAGE IN OTHER SAFE AND NON SPECULATIVE\nINVESTMENTS.\nWE HAVE BEEN THROUGH A LOT OF HEALTH AND SPIRITUAL TURMOIL,HENCE WILL NEED\nYOUR UNDERSTANDING AND ASSISTANCE.\n\nMAY I AT THIS POINT EMPHASIZE THE HIGH LEVEL OF CONFIDENTIALLITY WHICH\nTHIS BUSINESS DEMANDS AND HOPE YOU WILL NOT BETRAY THE TRUST AND\nCONFIDENCE WHICH WE REPOSE IN YOU.I SHALL PUT YOU IN THE PICTURE OF THIS\nBUSINESS,I.E TELL YOU WHERE THE FUNDS ARE CURRENTLY BEING MAINTAINED AND\nALSO DISCUSS OTHER MODALITIES INCLUDING REMUNERATION FOR YOUR SERVICES.\n\nI SHALL INFORM YOU WITH THE NEXT LINE OF ACTION AS SOON AS I RECEIVE YOUR\nPOSITIVE RESPONSE.\n\nIS THIS PROPOSITION ATTAINABLE?IF IT IS,PLEASE KINDLY FURNISH ME\nIMMEDIATELY BY E-MAIL WITH YOUR DIRECT TELEPHONE AND FAX NUMBERS TO\nENHANCE THE CONFIDENTIALLITY WHICH THIS BUSINESS DEMANDS.\n\nBEST REGARDS\nMR.SESAY MASSAQUOE.\nREPLY TO MY PRIVATE EMAIL ADDRESS...........>[email protected]\n\n\n__________________________________________________________ \n For special offers on latest publications on Malta or by Maltese authors go to http://shop.di-ve.com'</li><li>'New USDT Wallet 
Address for Payment\n\n\nDear customer Batel11,We want to inform you of an important update regarding our payment methods. As part of our ongoing efforts to streamline our payment processes and enhance security, we have established a new USDT (Tron) wallet address for receiving payments.New USDT Wallet Address: TPNq8zpLivwQi9FyaWhuycghYgB2i9RV4pPlease make sure to double-check the new wallet address before making any payments to avoid any potential issues. If you have any questions or need assistance with this update, please do not hesitate to contact our customer support team.Warm regards,'</li><li>"URGENT\n\n\nAttn: The President, \n\nDear Sir, \n\nMy mail may come to you as a surprise, but sincerely this is a \nproposal for a business deal that will benefit both of us. I am \ncontacting you after a frantic search for a person who will be \ntrustworthy and capable of handling a business of this dimension. \n\nMy name is Mr. Jonathan Mokoena, the Under-Secretary in charge of \nIntergration at the Specialized Technical Committee of the African \nUnion (AU), formerly Organization of Afriacn Unity (OAU). You may be \naware of the transformation of the OAU to AU, and the mandate to \nbuild a new united Africa modelled on the pattern of European Union \n(EU). For this therefore, the various African leaders recently \ninaugurated the New Patnership for African Development (NEPAD). NEPAD \nis to streamline Africa towards achieving a common market, defence \nforce, currency, foreign policy, judiciary etc. For the above, the \nvarious African countries have made whosoever contributions in \nhundreds of million dollars. We have equally received grants/aids \nfrom the EU, USA and other international governments and agencies. \nThese moneies in all have ran into millions of dollars. \n\n\nAs the officer in charge of receiving and managing these funds and \nexecuting the projects for which they are ment for, I have received \nall the money expected. I have also prepared my account which I have \nsubmitted to the AU High Command, and it has been approved by the AU \nSecratary-General, Dr. Amara Essy. However, in some of the money \nreceived, some of the donor countries and international bodies \nremitted to us amounts in excess of what they pledged. The AU before \nnow, has written to all of them to acknowledge the receipt of the \nmonies from them. The money in excess and which I have kept out with \nonly me having knowledge of it, is in the tune of Thirty-Five Million United States Dollars (US$35,000,000.00). As it is now, this money belongs to me, as neither the AU nor any of the donor countries/international agencies has declared their money missing. \n\n\nI am therefore contacting you to assist me with the movement and \nsafe-keeping of this fund. As a public officer in my category, I \ncannot openly put this money into any bank here in Addis Ababa, \nEthiopia, the AU headquarters where I am now, or in any other part of \nAfrica, as an account holder. This will surely raise eyebrows and \nexpose me. I have therefore concealed this amount of US$35M in four \nmetal trunk boxes, and declared them as artefacts belonging to a \nforeigner. I deposited the boxes with a Security Company based in \nSpain which has an affliate offices in Ghana, Cot d'Ivoire and South Africa. These cities are safe havens for this kind of transaction. \n\nThis transaction will however be hitch-free. So, I would therefore \nwant you to be in Banjul, The Gambia for the clearing and claiming of \nthis fund. 
I will furnish you with information/documents on how \n\nyou will stand as the beneficiary of the boxes. I have decided to \ngive to you 40% of the total amount involved. \n\nPlease I will want you to contact me on this e-mail address or the \nalternative: ([email protected]). \n\n\nAlso, you have to assure me of the secrecy and confidentiality in \nthis transaction. \n\nThanks in anticipation of your valued co-operation. \n\nMr. Jonathan Mokoena."</li></ul> |
| 0 | <ul><li>'empty\n\n\nhello'</li><li>'Re: Hello\n\n\nHmm On Mar 11 2024 08:31 PM TestUser21 wrote:It works!"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9688 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("rendulic/setfit-ll-MiniLM-L6-v2-email-fraud-2024-05-18")
# Run inference
preds = model("""How to resolve!
www.rewire.comInternational Financial Services - RewireInternational Financial Services - RewireGood Day YvonneOpen the attach file sent ,after the departmental payment receipt has be uploaded we also sent awareness letter note to Mr chalan which should be sent to your bank directly by chalan,Please ensure chalan uploads the departmental payment receipt receipt as soon as possible because the amount to your account is more than $100,000 when converted from pound sterling to USD,please write him (chalan)as soon as possible to settle thisKind RegardsReire Paying Deptwww.rewire.com""")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 260.5 | 816 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 18 |
| 1 | 14 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 60
- body_learning_rate: (0.0001, 0.0001)
- head_learning_rate: 0.0001
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
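For reference, the hyperparameters above map almost one-to-one onto SetFit's `TrainingArguments`. A hypothetical reproduction sketch follows; the base Sentence Transformer checkpoint and the toy dataset are assumptions, not the data actually used for this model:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy stand-in for the real fraud/non-fraud training set
train_ds = Dataset.from_dict({
    "text": ["Your account is locked, send payment to this wallet...", "Re: Hello, it works!"],
    "label": [1, 0],
})

# Base checkpoint guessed from the model id; substitute the one actually used
model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

args = TrainingArguments(
    batch_size=32,
    num_epochs=3,
    body_learning_rate=1e-4,
    head_learning_rate=1e-4,
    loss=CosineSimilarityLoss,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```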
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0083 | 1 | 0.2559 | - |
| 0.4167 | 50 | 0.0007 | - |
| 0.8333 | 100 | 0.0002 | - |
| 1.25 | 150 | 0.0002 | - |
| 1.6667 | 200 | 0.0001 | - |
| 2.0833 | 250 | 0.0001 | - |
| 2.5 | 300 | 0.0001 | - |
| 2.9167 | 350 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
NikolayKozloff/llama-3-typhoon-v1.5-8b-instruct-Q6_K-GGUF | NikolayKozloff | 2024-05-20T17:06:38Z | 0 | 1 | null | [
"gguf",
"instruct",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"th",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-20T17:06:18Z | ---
language:
- en
- th
license: llama3
tags:
- instruct
- chat
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# NikolayKozloff/llama-3-typhoon-v1.5-8b-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`scb10x/llama-3-typhoon-v1.5-8b-instruct`](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/llama-3-typhoon-v1.5-8b-instruct-Q6_K-GGUF --model llama-3-typhoon-v1.5-8b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/llama-3-typhoon-v1.5-8b-instruct-Q6_K-GGUF --model llama-3-typhoon-v1.5-8b-instruct.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-typhoon-v1.5-8b-instruct.Q6_K.gguf -n 128
```
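If you would rather call the model from Python than from the CLI, the separate `llama-cpp-python` bindings can load the same GGUF file. A minimal sketch, assuming the quantized file has been downloaded locally and `llama-cpp-python` is installed:
```python
from llama_cpp import Llama

# Path to the downloaded Q6_K file; adjust to wherever you saved it
llm = Llama(model_path="llama-3-typhoon-v1.5-8b-instruct.Q6_K.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```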
|
jerryyun/kicon_mixtral87_merged_torch212 | jerryyun | 2024-05-20T17:03:40Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-20T16:58:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
frost19k/dolphin-2.8-mistral-11b-v02-code-ft | frost19k | 2024-05-20T17:03:21Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"Nondzu/Mistral-7B-Instruct-v0.2-code-ft",
"conversational",
"base_model:Nondzu/Mistral-7B-Instruct-v0.2-code-ft",
"base_model:merge:Nondzu/Mistral-7B-Instruct-v0.2-code-ft",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:merge:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T16:24:24Z | ---
tags:
- merge
- mergekit
- lazymergekit
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft
---
# dolphin-2.8-mistral-11b-v02-code-ft
dolphin-2.8-mistral-11b-v02-code-ft is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [Nondzu/Mistral-7B-Instruct-v0.2-code-ft](https://huggingface.co/Nondzu/Mistral-7B-Instruct-v0.2-code-ft)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0, 8]
- sources:
- model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
layer_range: [4, 14]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [10, 20]
- sources:
- model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
layer_range: [16, 26]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [22, 32]
merge_method: passthrough
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "frost19k/dolphin-2.8-mistral-11b-v02-code-ft"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
BilalMuftuoglu/beit-base-patch16-224-65-fold4 | BilalMuftuoglu | 2024-05-20T17:02:46Z | 50 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T16:33:45Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-65-fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8732394366197183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-65-fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5415
- Accuracy: 0.8732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 3 | 0.7415 | 0.5352 |
| No log | 1.8462 | 6 | 0.7177 | 0.4507 |
| No log | 2.7692 | 9 | 0.6709 | 0.6056 |
| 0.748 | 4.0 | 13 | 0.6333 | 0.6338 |
| 0.748 | 4.9231 | 16 | 0.6162 | 0.7324 |
| 0.748 | 5.8462 | 19 | 0.6303 | 0.6338 |
| 0.6397 | 6.7692 | 22 | 0.5950 | 0.6761 |
| 0.6397 | 8.0 | 26 | 0.6325 | 0.6056 |
| 0.6397 | 8.9231 | 29 | 0.5799 | 0.7042 |
| 0.5957 | 9.8462 | 32 | 0.5793 | 0.6901 |
| 0.5957 | 10.7692 | 35 | 0.5869 | 0.7183 |
| 0.5957 | 12.0 | 39 | 0.6195 | 0.5775 |
| 0.5676 | 12.9231 | 42 | 0.5940 | 0.6479 |
| 0.5676 | 13.8462 | 45 | 0.6612 | 0.6197 |
| 0.5676 | 14.7692 | 48 | 0.5598 | 0.7465 |
| 0.5952 | 16.0 | 52 | 0.5472 | 0.7465 |
| 0.5952 | 16.9231 | 55 | 0.4823 | 0.7887 |
| 0.5952 | 17.8462 | 58 | 0.6493 | 0.6901 |
| 0.4908 | 18.7692 | 61 | 0.5539 | 0.7465 |
| 0.4908 | 20.0 | 65 | 0.5406 | 0.7606 |
| 0.4908 | 20.9231 | 68 | 0.5443 | 0.7606 |
| 0.4474 | 21.8462 | 71 | 0.6548 | 0.7042 |
| 0.4474 | 22.7692 | 74 | 0.4924 | 0.7746 |
| 0.4474 | 24.0 | 78 | 0.4671 | 0.8169 |
| 0.4106 | 24.9231 | 81 | 0.4117 | 0.8310 |
| 0.4106 | 25.8462 | 84 | 0.4630 | 0.8592 |
| 0.4106 | 26.7692 | 87 | 0.4915 | 0.8310 |
| 0.3163 | 28.0 | 91 | 0.6336 | 0.8028 |
| 0.3163 | 28.9231 | 94 | 0.5920 | 0.7887 |
| 0.3163 | 29.8462 | 97 | 0.5653 | 0.8028 |
| 0.3234 | 30.7692 | 100 | 0.6411 | 0.7746 |
| 0.3234 | 32.0 | 104 | 0.6728 | 0.7887 |
| 0.3234 | 32.9231 | 107 | 0.5503 | 0.8028 |
| 0.2969 | 33.8462 | 110 | 0.4914 | 0.8310 |
| 0.2969 | 34.7692 | 113 | 0.5952 | 0.8169 |
| 0.2969 | 36.0 | 117 | 0.7161 | 0.7746 |
| 0.2325 | 36.9231 | 120 | 0.6517 | 0.7746 |
| 0.2325 | 37.8462 | 123 | 0.5832 | 0.7887 |
| 0.2325 | 38.7692 | 126 | 0.6309 | 0.7746 |
| 0.2447 | 40.0 | 130 | 0.8011 | 0.7465 |
| 0.2447 | 40.9231 | 133 | 0.6085 | 0.7887 |
| 0.2447 | 41.8462 | 136 | 0.6470 | 0.7606 |
| 0.2447 | 42.7692 | 139 | 0.7744 | 0.7746 |
| 0.2217 | 44.0 | 143 | 0.5730 | 0.8310 |
| 0.2217 | 44.9231 | 146 | 0.5577 | 0.8169 |
| 0.2217 | 45.8462 | 149 | 0.5226 | 0.8451 |
| 0.2231 | 46.7692 | 152 | 0.5115 | 0.8310 |
| 0.2231 | 48.0 | 156 | 0.5415 | 0.8732 |
| 0.2231 | 48.9231 | 159 | 0.5971 | 0.8310 |
| 0.2014 | 49.8462 | 162 | 0.8717 | 0.7606 |
| 0.2014 | 50.7692 | 165 | 0.7063 | 0.7887 |
| 0.2014 | 52.0 | 169 | 0.6917 | 0.7887 |
| 0.1827 | 52.9231 | 172 | 0.6880 | 0.7887 |
| 0.1827 | 53.8462 | 175 | 0.7027 | 0.8028 |
| 0.1827 | 54.7692 | 178 | 0.6764 | 0.8310 |
| 0.1558 | 56.0 | 182 | 0.7398 | 0.7887 |
| 0.1558 | 56.9231 | 185 | 0.7787 | 0.8169 |
| 0.1558 | 57.8462 | 188 | 0.7678 | 0.8169 |
| 0.1637 | 58.7692 | 191 | 0.7898 | 0.7606 |
| 0.1637 | 60.0 | 195 | 0.7105 | 0.8310 |
| 0.1637 | 60.9231 | 198 | 0.7262 | 0.8592 |
| 0.1591 | 61.8462 | 201 | 0.7464 | 0.8169 |
| 0.1591 | 62.7692 | 204 | 0.7233 | 0.8310 |
| 0.1591 | 64.0 | 208 | 0.7263 | 0.8310 |
| 0.1521 | 64.9231 | 211 | 0.7377 | 0.8028 |
| 0.1521 | 65.8462 | 214 | 0.7267 | 0.8310 |
| 0.1521 | 66.7692 | 217 | 0.7178 | 0.8169 |
| 0.157 | 68.0 | 221 | 0.8585 | 0.7887 |
| 0.157 | 68.9231 | 224 | 0.8629 | 0.7887 |
| 0.157 | 69.8462 | 227 | 0.7329 | 0.8028 |
| 0.1593 | 70.7692 | 230 | 0.6997 | 0.8310 |
| 0.1593 | 72.0 | 234 | 0.8074 | 0.8028 |
| 0.1593 | 72.9231 | 237 | 1.0352 | 0.7887 |
| 0.134 | 73.8462 | 240 | 1.0472 | 0.7887 |
| 0.134 | 74.7692 | 243 | 0.7477 | 0.8169 |
| 0.134 | 76.0 | 247 | 0.7357 | 0.8310 |
| 0.1386 | 76.9231 | 250 | 0.8497 | 0.7887 |
| 0.1386 | 77.8462 | 253 | 0.9464 | 0.7746 |
| 0.1386 | 78.7692 | 256 | 0.8535 | 0.7887 |
| 0.1246 | 80.0 | 260 | 0.7998 | 0.8310 |
| 0.1246 | 80.9231 | 263 | 0.8214 | 0.8310 |
| 0.1246 | 81.8462 | 266 | 0.8374 | 0.8028 |
| 0.1246 | 82.7692 | 269 | 0.8597 | 0.8028 |
| 0.1271 | 84.0 | 273 | 0.8437 | 0.8028 |
| 0.1271 | 84.9231 | 276 | 0.8370 | 0.8028 |
| 0.1271 | 85.8462 | 279 | 0.8298 | 0.8028 |
| 0.1274 | 86.7692 | 282 | 0.8340 | 0.8028 |
| 0.1274 | 88.0 | 286 | 0.8462 | 0.8028 |
| 0.1274 | 88.9231 | 289 | 0.8594 | 0.8028 |
| 0.1251 | 89.8462 | 292 | 0.8504 | 0.8028 |
| 0.1251 | 90.7692 | 295 | 0.8480 | 0.8028 |
| 0.1251 | 92.0 | 299 | 0.8471 | 0.8028 |
| 0.1207 | 92.3077 | 300 | 0.8469 | 0.8028 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AI4BPM/accounts_receivable_process_model | AI4BPM | 2024-05-20T17:01:37Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-20T16:58:29Z | ---
license: apache-2.0
---
|
rongsen/nlp_task2 | rongsen | 2024-05-20T16:58:37Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T16:51:56Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
savers1/hillside | savers1 | 2024-05-20T16:56:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T16:43:09Z | ---
license: apache-2.0
---
|
Aratako/Ninja-v1-RP-WIP | Aratako | 2024-05-20T16:56:00Z | 54 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"ja",
"dataset:Aratako/Rosebleu-1on1-Dialogues-RP",
"dataset:Aratako/LimaRP-augmented-ja-karakuri",
"dataset:grimulkan/LimaRP-augmented",
"dataset:Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:OmniAICreator/Japanese-Roleplay",
"base_model:Local-Novel-LLM-project/Ninja-v1-NSFW",
"base_model:finetune:Local-Novel-LLM-project/Ninja-v1-NSFW",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T15:31:02Z | ---
license: apache-2.0
datasets:
- Aratako/Rosebleu-1on1-Dialogues-RP
- Aratako/LimaRP-augmented-ja-karakuri
- grimulkan/LimaRP-augmented
- Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- OmniAICreator/Japanese-Roleplay
language:
- ja
library_name: transformers
tags:
- roleplay
base_model:
- Local-Novel-LLM-project/Ninja-v1-NSFW
---
# Ninja-v1-RP-WIP
## Overview
This model is [Local-Novel-LLM-project/Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW) fine-tuned for roleplay with LoRA.
It is used as the base model for [Aratako/Ninja-v1-RP](https://huggingface.co/Aratako/Ninja-v1-RP).
## Prompt format
Please use the Vicuna chat template. The system prompt that carries the scenario settings and other context is expected to appear before the first `USER: `.
When running multi-turn dialogue, be sure to append the `eos_token` to the end of the assistant's response in every turn.
```
{roleplay instructions, description of the world and synopsis, character settings, etc.}
USER: {user's first message}
ASSISTANT:
```
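As a concrete illustration of the format above, here is a minimal generation sketch with 🤗 Transformers (the system prompt, user message, and sampling settings are placeholders, not recommendations from the model author):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aratako/Ninja-v1-RP-WIP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# System prompt (scenario and character settings) goes before the first "USER: "
system = "You are roleplaying as a cheerful innkeeper in a small fantasy town."
user = "Good evening. Do you have a room for tonight?"
prompt = f"{system}\nUSER: {user}\nASSISTANT: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```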
## Training datasets
No datasets created with models whose outputs are problematic to use for training (such as GPT or Llama 2) were used at all.
### Japanese datasets
- [Aratako/Rosebleu-1on1-Dialogues-RP](https://huggingface.co/datasets/Aratako/Rosebleu-1on1-Dialogues-RP)
- [Aratako/LimaRP-augmented-ja-karakuri](https://huggingface.co/datasets/Aratako/LimaRP-augmented-ja-karakuri)
- [Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja](https://huggingface.co/datasets/Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja)
- [OmniAICreator/Japanese-Roleplay](https://huggingface.co/datasets/OmniAICreator/Japanese-Roleplay)
### English datasets
- [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented)
- [SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed](https://huggingface.co/datasets/SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed)
## Training setup
We rented a GPU server on Runpod and trained on 4x A6000. The main training parameters are as follows.
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "lm_head"]
- learning_rate: 2e-5
- num_train_epochs: 3 epochs
- batch_size: 64
- max_seq_length: 4096 |
HariprasathSB/whisper-vulnerable | HariprasathSB | 2024-05-20T16:54:33Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:sujith013/whisper-meduim-tamil-vulnerable",
"base_model:finetune:sujith013/whisper-meduim-tamil-vulnerable",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T15:38:07Z | ---
license: apache-2.0
base_model: sujith013/whisper-meduim-tamil-vulnerable
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-vulnerable
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-vulnerable
This model is a fine-tuned version of [sujith013/whisper-meduim-tamil-vulnerable](https://huggingface.co/sujith013/whisper-meduim-tamil-vulnerable) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9527
- Wer: 78.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.136 | 1.7621 | 200 | 0.9527 | 78.7841 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
GabSo/CLIP_Perso_flikr30k | GabSo | 2024-05-20T16:52:03Z | 0 | 0 | null | [
"en",
"dataset:atasoglu/flickr8k-dataset",
"license:mit",
"region:us"
] | null | 2024-05-20T15:26:41Z | ---
license: mit
datasets:
- atasoglu/flickr8k-dataset
language:
- en
--- |
binnybn98/Llama-3-8B-4bit-ruozhiba-finetune | binnybn98 | 2024-05-20T16:48:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T16:48:20Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** binnybn98
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dane2180/MyFirstModel | Dane2180 | 2024-05-20T16:42:33Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-bnb-4bit",
"region:us"
] | null | 2024-05-20T16:36:31Z | ---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
selmamalak/chestmnist-swin-base-finetuned | selmamalak | 2024-05-20T16:40:53Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:medmnist-v2",
"base_model:microsoft/swin-large-patch4-window7-224-in22k",
"base_model:adapter:microsoft/swin-large-patch4-window7-224-in22k",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T16:40:44Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/swin-large-patch4-window7-224-in22k
datasets:
- medmnist-v2
model-index:
- name: chestmnist-swin-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chestmnist-swin-base-finetuned
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-large-patch4-window7-224-in22k) on the medmnist-v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
qihoo360/llama3-8B-360Zhinao-360k-Instruct | qihoo360 | 2024-05-20T16:39:55Z | 11 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"zh",
"dataset:LargeWorldModel/ultrachat_qa_mix_512K",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T12:04:37Z | ---
license: apache-2.0
datasets:
- LargeWorldModel/ultrachat_qa_mix_512K
language:
- en
- zh
---
# Model Card for llama3-8B-360Zhinao-360k-Instruct
llama3-8B-360Zhinao-360k-Instruct is 360Zhinao's extension of llama3-8B-Instruct to a 360k context window [[GitHub]](https://github.com/Qihoo360/360zhinao/tree/main/360k).
Within the 360k-token length,
llama3-8B-360Zhinao-360k-Instruct achieves:
- **100%** perfect recall on the "value retrieval" variant of NIAH (Needle-In-A-Haystack), which requires the model to retrieve the number in the inserted needle "The special magic {random city} number is {random 7-digit number}".
- **99.75%** near-perfect recall on the [original NIAH](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) and its corresponding Chinese counterpart, where the needle (e.g. The best thing to do in San Francisco is...) and haystack (e.g. Paul Graham's essays which inevitably talk about San Francisco) are more relevant, hence a more difficult task.
Other models with 100% recall on value retrieval could struggle with this NIAH version.
## 360k-NIAH (Needle-In-A-Haystack) results
### "value retrieval" variant of NIAH
<img src="https://github.com/Qihoo360/360zhinao/blob/main/assets/llama3-8B-360Zhinao-360k-Instruct.value_score.png?raw=true" width="600" />
### Original NIAH
<img src="https://github.com/Qihoo360/360zhinao/blob/main/assets/llama3-8B-360Zhinao-360k-Instruct.en_score.png?raw=true" width="600" />
### Chinese NIAH
<img src="https://github.com/Qihoo360/360zhinao/blob/main/assets/llama3-8B-360Zhinao-360k-Instruct.zh_score.png?raw=true" width="600" />
### Remarks
We found that [the "value retrieval" variant of NIAH](https://github.com/Arize-ai/LLMTest_NeedleInAHaystack) (widely used recently in e.g. Gemini, LWM and gradient.ai) is relatively easy.
100% all-green results on value retrieval don't necessarily mean near-perfect results on more difficult NIAH tasks, as demonstrated by this [original-version NIAH](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) result of one open-sourced llama3-8B-262k model:
<img src="https://github.com/Qihoo360/360zhinao/blob/main/assets/open-262k.en_score.png?raw=true" width="600" />
This model does achieve 100% all-green results on value retrieval, but only less-than-satisfactory results on the original version.
### Reproduce
[360k/niah](https://github.com/Qihoo360/360zhinao/blob/main/360k/niah/) generates the raw results.
The score for value retrieval NIAH is calculated on-the-fly when generating the raw results, while the actual score of original and Chinese NIAH is calculated in [360k/plot](https://github.com/Qihoo360/360zhinao/blob/main/360k/plot/).
For the original version, 100% score is given if the regular expression `sandwich.+?dolores.+?sunny` matches the model output, and edit distance otherwise.
For the Chinese version, 100% score is given if `刘秀` is present in the model output, and edit distance otherwise. For the English-biased llama3 models this may not be perfect.
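For orientation, a minimal sketch of that scoring rule follows; the reference answer string and the edit-distance normalization are assumptions here, and the exact logic lives in 360k/plot:
```python
import re

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def score_original_niah(output: str) -> float:
    # Full marks when the needle appears in order
    if re.search(r"sandwich.+?dolores.+?sunny", output, flags=re.IGNORECASE | re.DOTALL):
        return 100.0
    # Otherwise partial credit from the edit distance to a reference answer (assumed needle text)
    reference = "eat a sandwich and sit in Dolores Park on a sunny day"
    d = edit_distance(output.lower(), reference.lower())
    return max(0.0, 100.0 * (1.0 - d / max(len(reference), len(output), 1)))

def score_chinese_niah(output: str) -> float:
    return 100.0 if "刘秀" in output else 0.0  # edit-distance fallback omitted for brevity
```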
## Usage
llama3-8B-360Zhinao-360k-Instruct could be launched with [vllm](https://github.com/vllm-project/vllm).
To perform inference on 360k-token inputs, we used an 8 x 80G machine (A800).
```shell
model_path=${1}
export ENV_PORT=7083
export ENV_TP=8
export ENV_MODEL_PATH=$model_path
echo ${ENV_MODEL_PATH}
export ENV_MAX_MODEL_LEN=365000
export ENV_MAX_BATCH_TOKENS=365000
export ENV_GPU_MEMORY_UTIL=0.6
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
python -m vllm.entrypoints.openai.api_server \
--model "${ENV_MODEL_PATH:-/workspace/model}" \
--tensor-parallel-size "${ENV_TP:-2}" \
--trust-remote-code \
--port "${ENV_PORT:-8002}" \
--gpu-memory-utilization "${ENV_GPU_MEMORY_UTIL:-0.92}" \
--max-num-batched-tokens "${ENV_MAX_BATCH_TOKENS:-18000}" \
--max-model-len "${ENV_MAX_MODEL_LEN:-4096}" \
--max-num-seqs "${ENV_MAX_NUM_SEQS:-32}" \
--enforce-eager \
> log8.server 2>&1
```
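Once the server above is up, it speaks the OpenAI-compatible API on the configured port. A minimal client sketch (the served model name must match the path passed to `--model`, and the long document is a placeholder):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:7083/v1", api_key="EMPTY")

long_document = open("long_report.txt").read()  # anything up to ~360k tokens

resp = client.chat.completions.create(
    model="/workspace/model",  # whatever path/name was passed to --model
    messages=[{"role": "user", "content": long_document + "\n\nWhat is the special magic number mentioned above?"}],
    max_tokens=128,
    temperature=0.0,
)
print(resp.choices[0].message.content)
```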
## Methods
llama3-8B-360Zhinao-360k-Instruct is trained from [llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Its original context-length is 8k with RoPE base 500,000.
We directly extended its context length to 360k. We changed RoPE base to 500,000,000 and trained on a combined SFT dataset of [LWM's open-sourced data](https://huggingface.co/LargeWorldModel) and internal long-context data in Chinese and English.
We implemented SFT on top of [EasyContext](https://github.com/jzhang38/EasyContext/) ([code](https://github.com/Qihoo360/360zhinao/blob/main/360k/train.sft.EasyContext.py) with simple derivation on loss reduction), but later found that turning on pretraining loss produced much better results.
SFT is likely suitable for further finetuning within the already extended context window.
We have been using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) for several months with tailored optimization on GPU memory. Its context parallelism wasn’t quite ready back then and we have now switched to ring attention implementations such as EasyContext.
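In 🤗 Transformers terms, the RoPE-base change described above amounts to overriding `rope_theta` (and the position limit) in the Llama config before long-context finetuning. A minimal sketch, not the actual training code:
```python
from transformers import AutoConfig, AutoModelForCausalLM

base = "meta-llama/Meta-Llama-3-8B-Instruct"
config = AutoConfig.from_pretrained(base)
config.rope_theta = 500_000_000           # up from llama3's original 500,000
config.max_position_embeddings = 360_000  # target context window

model = AutoModelForCausalLM.from_pretrained(base, config=config, torch_dtype="auto")
```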
## Contact & License
Email: [email protected]
The source code of this repository follows the open-source license Apache 2.0.
This project is built on other open-source projects, including llama3, LWM and EasyContext, whose original licenses should also be followed by users. |
allopeap/tmp_trainer | allopeap | 2024-05-20T16:36:39Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T16:35:35Z | ---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
redponike/Yi-1.5-9B-32K-GGUF | redponike | 2024-05-20T16:36:21Z | 2 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T13:09:30Z | GGUF quants of [01-ai/Yi-1.5-9B-32K](https://huggingface.co/01-ai/Yi-1.5-9B-32K) |
antitheft159/wupitul.195 | antitheft159 | 2024-05-20T16:34:53Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-05-20T16:33:35Z | ---
license: cc-by-nc-sa-4.0
---
|
BilalMuftuoglu/beit-base-patch16-224-65-fold3 | BilalMuftuoglu | 2024-05-20T16:33:38Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T16:04:33Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-65-fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8591549295774648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-65-fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5711
- Accuracy: 0.8592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 3 | 0.8549 | 0.5211 |
| No log | 1.8462 | 6 | 0.6976 | 0.5634 |
| No log | 2.7692 | 9 | 0.6809 | 0.5634 |
| 0.7778 | 4.0 | 13 | 0.6459 | 0.6056 |
| 0.7778 | 4.9231 | 16 | 0.6353 | 0.6338 |
| 0.7778 | 5.8462 | 19 | 0.6141 | 0.6197 |
| 0.6542 | 6.7692 | 22 | 0.6003 | 0.6056 |
| 0.6542 | 8.0 | 26 | 0.6168 | 0.6761 |
| 0.6542 | 8.9231 | 29 | 0.5781 | 0.6901 |
| 0.5817 | 9.8462 | 32 | 0.5710 | 0.7324 |
| 0.5817 | 10.7692 | 35 | 0.5345 | 0.7465 |
| 0.5817 | 12.0 | 39 | 0.6058 | 0.6479 |
| 0.513 | 12.9231 | 42 | 0.6433 | 0.7042 |
| 0.513 | 13.8462 | 45 | 0.5830 | 0.7042 |
| 0.513 | 14.7692 | 48 | 0.6167 | 0.7042 |
| 0.4756 | 16.0 | 52 | 0.7304 | 0.6338 |
| 0.4756 | 16.9231 | 55 | 0.5485 | 0.7606 |
| 0.4756 | 17.8462 | 58 | 0.5166 | 0.7606 |
| 0.4123 | 18.7692 | 61 | 0.6267 | 0.7746 |
| 0.4123 | 20.0 | 65 | 0.4253 | 0.8169 |
| 0.4123 | 20.9231 | 68 | 0.4698 | 0.7746 |
| 0.3745 | 21.8462 | 71 | 0.5312 | 0.7887 |
| 0.3745 | 22.7692 | 74 | 0.5158 | 0.7465 |
| 0.3745 | 24.0 | 78 | 0.5969 | 0.8028 |
| 0.3751 | 24.9231 | 81 | 0.5419 | 0.7606 |
| 0.3751 | 25.8462 | 84 | 0.4630 | 0.8028 |
| 0.3751 | 26.7692 | 87 | 0.5367 | 0.8028 |
| 0.3079 | 28.0 | 91 | 0.5220 | 0.8310 |
| 0.3079 | 28.9231 | 94 | 0.5342 | 0.7887 |
| 0.3079 | 29.8462 | 97 | 0.5711 | 0.8592 |
| 0.2831 | 30.7692 | 100 | 0.5757 | 0.7606 |
| 0.2831 | 32.0 | 104 | 0.5200 | 0.7465 |
| 0.2831 | 32.9231 | 107 | 0.4496 | 0.8451 |
| 0.292 | 33.8462 | 110 | 0.6480 | 0.8169 |
| 0.292 | 34.7692 | 113 | 0.6956 | 0.7465 |
| 0.292 | 36.0 | 117 | 0.5629 | 0.8169 |
| 0.2712 | 36.9231 | 120 | 0.7614 | 0.6901 |
| 0.2712 | 37.8462 | 123 | 0.5625 | 0.8028 |
| 0.2712 | 38.7692 | 126 | 0.5711 | 0.7746 |
| 0.2447 | 40.0 | 130 | 0.5476 | 0.7746 |
| 0.2447 | 40.9231 | 133 | 0.5354 | 0.8028 |
| 0.2447 | 41.8462 | 136 | 0.5217 | 0.8169 |
| 0.2447 | 42.7692 | 139 | 0.5767 | 0.8028 |
| 0.185 | 44.0 | 143 | 0.5606 | 0.8169 |
| 0.185 | 44.9231 | 146 | 0.6719 | 0.7887 |
| 0.185 | 45.8462 | 149 | 0.6074 | 0.7887 |
| 0.1921 | 46.7692 | 152 | 0.6351 | 0.7746 |
| 0.1921 | 48.0 | 156 | 0.5916 | 0.7746 |
| 0.1921 | 48.9231 | 159 | 0.6103 | 0.7887 |
| 0.1844 | 49.8462 | 162 | 0.5758 | 0.7887 |
| 0.1844 | 50.7692 | 165 | 0.5497 | 0.8169 |
| 0.1844 | 52.0 | 169 | 0.5377 | 0.8310 |
| 0.17 | 52.9231 | 172 | 0.6279 | 0.8169 |
| 0.17 | 53.8462 | 175 | 0.5826 | 0.7887 |
| 0.17 | 54.7692 | 178 | 0.7173 | 0.7746 |
| 0.1724 | 56.0 | 182 | 0.5340 | 0.8451 |
| 0.1724 | 56.9231 | 185 | 0.5528 | 0.8592 |
| 0.1724 | 57.8462 | 188 | 0.6547 | 0.7887 |
| 0.1734 | 58.7692 | 191 | 0.5986 | 0.8310 |
| 0.1734 | 60.0 | 195 | 0.6057 | 0.8028 |
| 0.1734 | 60.9231 | 198 | 0.7183 | 0.8028 |
| 0.1582 | 61.8462 | 201 | 0.5912 | 0.8169 |
| 0.1582 | 62.7692 | 204 | 0.6002 | 0.8028 |
| 0.1582 | 64.0 | 208 | 0.7886 | 0.7606 |
| 0.1372 | 64.9231 | 211 | 0.7019 | 0.7887 |
| 0.1372 | 65.8462 | 214 | 0.6460 | 0.8169 |
| 0.1372 | 66.7692 | 217 | 0.6935 | 0.8028 |
| 0.153 | 68.0 | 221 | 0.8108 | 0.7746 |
| 0.153 | 68.9231 | 224 | 0.7539 | 0.7887 |
| 0.153 | 69.8462 | 227 | 0.7090 | 0.7746 |
| 0.1512 | 70.7692 | 230 | 0.7147 | 0.7887 |
| 0.1512 | 72.0 | 234 | 0.8680 | 0.8028 |
| 0.1512 | 72.9231 | 237 | 0.8785 | 0.7887 |
| 0.1381 | 73.8462 | 240 | 0.7413 | 0.7887 |
| 0.1381 | 74.7692 | 243 | 0.7255 | 0.8169 |
| 0.1381 | 76.0 | 247 | 0.7124 | 0.7887 |
| 0.1432 | 76.9231 | 250 | 0.7343 | 0.8028 |
| 0.1432 | 77.8462 | 253 | 0.7404 | 0.8028 |
| 0.1432 | 78.7692 | 256 | 0.6941 | 0.7887 |
| 0.1135 | 80.0 | 260 | 0.6721 | 0.8310 |
| 0.1135 | 80.9231 | 263 | 0.6692 | 0.8310 |
| 0.1135 | 81.8462 | 266 | 0.6880 | 0.8028 |
| 0.1135 | 82.7692 | 269 | 0.6857 | 0.8028 |
| 0.1182 | 84.0 | 273 | 0.6850 | 0.7887 |
| 0.1182 | 84.9231 | 276 | 0.6816 | 0.7887 |
| 0.1182 | 85.8462 | 279 | 0.7048 | 0.7746 |
| 0.1019 | 86.7692 | 282 | 0.7804 | 0.7746 |
| 0.1019 | 88.0 | 286 | 0.8013 | 0.7746 |
| 0.1019 | 88.9231 | 289 | 0.7506 | 0.7606 |
| 0.1163 | 89.8462 | 292 | 0.7047 | 0.7746 |
| 0.1163 | 90.7692 | 295 | 0.6763 | 0.8028 |
| 0.1163 | 92.0 | 299 | 0.6606 | 0.8028 |
| 0.1258 | 92.3077 | 300 | 0.6592 | 0.8028 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
paul-stansifer/tinyllama-qwantz-gen | paul-stansifer | 2024-05-20T16:30:44Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"text-generation",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-05-20T16:24:25Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/tinyllama-bnb-4bit
model-index:
- name: tinyllama-qwantz-gen
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-qwantz-gen
This model is a fine-tuned version of [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4795 | 1.0 | 313 | 1.4780 |
| 1.4449 | 2.0 | 626 | 1.4666 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
dbands/llama-3-8b-instruct-code-instructions-blender-16bit | dbands | 2024-05-20T16:29:06Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit",
"base_model:finetune:dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-05T16:37:31Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NikolayKozloff/Alphacode-MALI-9B-Q8_0-GGUF | NikolayKozloff | 2024-05-20T16:26:41Z | 4 | 1 | null | [
"gguf",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-20T16:26:17Z | ---
language:
- ko
license: cc-by-4.0
tags:
- merge
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# NikolayKozloff/Alphacode-MALI-9B-Q8_0-GGUF
This model was converted to GGUF format from [`Alphacode-AI/Alphacode-MALI-9B`](https://huggingface.co/Alphacode-AI/Alphacode-MALI-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alphacode-AI/Alphacode-MALI-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Alphacode-MALI-9B-Q8_0-GGUF --model alphacode-mali-9b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Alphacode-MALI-9B-Q8_0-GGUF --model alphacode-mali-9b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m alphacode-mali-9b.Q8_0.gguf -n 128
```
|
DUAL-GPO-2/phi-2-irepo-chatml-v9-i1 | DUAL-GPO-2 | 2024-05-20T16:22:22Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-irepo-chatml-merged-i0",
"base_model:adapter:DUAL-GPO/phi-2-irepo-chatml-merged-i0",
"region:us"
] | null | 2024-05-20T13:10:32Z | ---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/phi-2-irepo-chatml-merged-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-irepo-chatml-v9-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-irepo-chatml-v9-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-irepo-chatml-merged-i0](https://huggingface.co/DUAL-GPO/phi-2-irepo-chatml-merged-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |