(Column ranges from the dataset viewer: modelId string 5-139 chars; author string 2-42 chars; last_modified timestamp[us, tz=UTC] 2020-02-15 11:33:14 to 2025-06-27 12:29:05; downloads int64 0 to 223M; likes int64 0 to 11.7k; library_name 500 classes; tags sequences of 1 to 4.05k; pipeline_tag 54 classes; createdAt timestamp[us, tz=UTC] 2022-03-02 23:29:04 to 2025-06-27 12:27:55; card string 11 chars to 1.01M chars.)
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
mradermacher/Top-Western-Maid-7B-GGUF | mradermacher | 2024-11-03T04:49:10Z | 13 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:saishf/Top-Western-Maid-7B",
"base_model:quantized:saishf/Top-Western-Maid-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T22:32:54Z | ---
base_model: saishf/Top-Western-Maid-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/saishf/Top-Western-Maid-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Top-Western-Maid-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
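As a concrete starting point, here is a minimal Python sketch (an illustration, not part of the original card) that fetches one quant from this repo and runs it with `llama-cpp-python`; the filename comes from the table below:
```python
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file (Q4_K_M, a "fast, recommended" size below).
path = hf_hub_download(
    repo_id="mradermacher/Top-Western-Maid-7B-GGUF",
    filename="Top-Western-Maid-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a haiku about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```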
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Top-Western-Maid-7B-GGUF/resolve/main/Top-Western-Maid-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf | RichardErkhov | 2024-11-03T04:46:44Z | 9 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T18:48:01Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
WizardLM-Math-70B-v0.1 - GGUF
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/WizardLM-Math-70B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [WizardLM-Math-70B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q2_K.gguf) | Q2_K | 23.71GB |
| [WizardLM-Math-70B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [WizardLM-Math-70B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [WizardLM-Math-70B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [WizardLM-Math-70B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [WizardLM-Math-70B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q3_K.gguf) | Q3_K | 30.99GB |
| [WizardLM-Math-70B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [WizardLM-Math-70B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [WizardLM-Math-70B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [WizardLM-Math-70B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q4_0.gguf) | Q4_0 | 36.2GB |
| [WizardLM-Math-70B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [WizardLM-Math-70B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/blob/main/WizardLM-Math-70B-v0.1.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [WizardLM-Math-70B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q4_K | 38.58GB |
| [WizardLM-Math-70B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [WizardLM-Math-70B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q4_1 | 40.2GB |
| [WizardLM-Math-70B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q5_0 | 44.2GB |
| [WizardLM-Math-70B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [WizardLM-Math-70B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q5_K | 45.41GB |
| [WizardLM-Math-70B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [WizardLM-Math-70B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q5_1 | 48.2GB |
| [WizardLM-Math-70B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q6_K | 52.7GB |
| [WizardLM-Math-70B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_WizardLM-Math-70B-v0.1-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
license: agpl-3.0
tags:
- merge
- wizardlm
- mique
pipeline_tag: text-generation
model-index:
- name: WizardLM-Math-70B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.06
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.01
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.07
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.77
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-v0.1
      name: Open LLM Leaderboard
---
## Examples
There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step.
```
Step 1: Identify the initial situation
There are three killers in a room, which we can label as A, B, and C.
Step 2: Describe the event
Someone (let's call this person D) enters the room and kills one of the killers. For the sake of simplicity, let's say that Killer A is killed.
Step 3: Analyze the new situation
After the event, there are now two killers left in the room: B and C. The person who entered the room, D, is also still present. However, since D did not leave the room after killing A, it means that D is also a killer. Therefore, there are actually three killers in the room: B, C, and D.
Final Answer: After the event, there are three killers left in the room: B, C, and D.
```
Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step
```
Step 1: Understand the given information
We know that Mark is faster than Mary, and Mary is faster than Joe. This means that Mark is the fastest among the three, followed by Mary, and then Joe.
Step 2: Analyze the question
The question asks if Joe is faster than Mark. Based on the given information, we can conclude that Joe is not faster than Mark. In fact, Mark is faster than Joe.
Final Answer: No, Joe is not faster than Mark.
```
If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultaneously?
```
The time taken for a shirt to dry depends on several factors such as the material of the shirt, the humidity, temperature, wind speed, etc. However, if we assume that all four shirts have similar properties and the weather conditions remain constant, then you can expect the drying time to be proportional to the number of shirts.
In other words, if two shirts take 5 hours to dry, then four shirts will likely take twice as long, or 10 hours. This is because there is more fabric to dry, which means it will absorb more water from the air, thus taking longer to reach equilibrium with the surrounding environment.
However, this is just an estimate, and the actual drying time may vary depending on the specific circumstances.
```
Wrong, it takes the same amount of time!
explain step by step 25-4*2+3=?
```
Step 1: Perform the multiplication operation first.
25 - 4 * 2 + 3 = 25 - 8 + 3
Step 2: Now, perform the subtraction and addition operations from left to right.
25 - 8 + 3 = 17 + 3
Step 3: Finally, perform the remaining addition operation.
17 + 3 = 20
So, 25-4*2+3 equals 20.
```
## Eval
### WizardLM/WizardLM-70B-V1.0 vs. MaziyarPanahi/WizardLM-Math-70B-v0.1

### Leaderboard
```python
{
"all": {
"acc": 0.6914116069568377,
"acc_stderr": 0.03063431437342948,
"acc_norm": 0.6938613221179539,
"acc_norm_stderr": 0.031238741076549784,
"mc1": 0.40269277845777235,
"mc1_stderr": 0.01716883093518722,
"mc2": 0.5707095526544473,
"mc2_stderr": 0.01525040450448649
},
"harness|arc:challenge|25": {
"acc": 0.6322525597269625,
"acc_stderr": 0.014090995618168482,
"acc_norm": 0.6706484641638225,
"acc_norm_stderr": 0.013734057652635474
},
"harness|hellaswag|10": {
"acc": 0.6746664011153157,
"acc_stderr": 0.0046754187743142306,
"acc_norm": 0.8600876319458275,
"acc_norm_stderr": 0.0034618713240671846
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.03317672787533157,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.03317672787533157
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7283018867924528,
"acc_stderr": 0.027377706624670713,
"acc_norm": 0.7283018867924528,
"acc_norm_stderr": 0.027377706624670713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.032166008088022675,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.032166008088022675
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.035331333893236574,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.035331333893236574
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.047551296160629475,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.047551296160629475
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.676595744680851,
"acc_stderr": 0.030579442773610337,
"acc_norm": 0.676595744680851,
"acc_norm_stderr": 0.030579442773610337
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.04113914981189261,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.04113914981189261
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4497354497354497,
"acc_stderr": 0.02562085704293665,
"acc_norm": 0.4497354497354497,
"acc_norm_stderr": 0.02562085704293665
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8129032258064516,
"acc_stderr": 0.022185710092252252,
"acc_norm": 0.8129032258064516,
"acc_norm_stderr": 0.022185710092252252
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8363636363636363,
"acc_stderr": 0.02888787239548795,
"acc_norm": 0.8363636363636363,
"acc_norm_stderr": 0.02888787239548795
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822502,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822502
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.927461139896373,
"acc_stderr": 0.018718998520678178,
"acc_norm": 0.927461139896373,
"acc_norm_stderr": 0.018718998520678178
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7025641025641025,
"acc_stderr": 0.023177408131465953,
"acc_norm": 0.7025641025641025,
"acc_norm_stderr": 0.023177408131465953
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.02904560029061626,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.02904560029061626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.02626502460827588,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.02626502460827588
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4503311258278146,
"acc_stderr": 0.04062290018683776,
"acc_norm": 0.4503311258278146,
"acc_norm_stderr": 0.04062290018683776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8954128440366973,
"acc_stderr": 0.013120530245265593,
"acc_norm": 0.8954128440366973,
"acc_norm_stderr": 0.013120530245265593
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.03367462138896078,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.03367462138896078
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9166666666666666,
"acc_stderr": 0.019398452135813905,
"acc_norm": 0.9166666666666666,
"acc_norm_stderr": 0.019398452135813905
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8860759493670886,
"acc_stderr": 0.020681745135884565,
"acc_norm": 0.8860759493670886,
"acc_norm_stderr": 0.020681745135884565
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.757847533632287,
"acc_stderr": 0.028751392398694755,
"acc_norm": 0.757847533632287,
"acc_norm_stderr": 0.028751392398694755
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097655,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097655
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742179,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742179
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.905982905982906,
"acc_stderr": 0.019119892798924974,
"acc_norm": 0.905982905982906,
"acc_norm_stderr": 0.019119892798924974
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526066,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526066
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8697318007662835,
"acc_stderr": 0.012036729568216054,
"acc_norm": 0.8697318007662835,
"acc_norm_stderr": 0.012036729568216054
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7774566473988439,
"acc_stderr": 0.02239421566194282,
"acc_norm": 0.7774566473988439,
"acc_norm_stderr": 0.02239421566194282
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5553072625698324,
"acc_stderr": 0.016619881988177012,
"acc_norm": 0.5553072625698324,
"acc_norm_stderr": 0.016619881988177012
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.77491961414791,
"acc_stderr": 0.023720088516179027,
"acc_norm": 0.77491961414791,
"acc_norm_stderr": 0.023720088516179027
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.02240967454730417,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.02240967454730417
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5390070921985816,
"acc_stderr": 0.029736592526424445,
"acc_norm": 0.5390070921985816,
"acc_norm_stderr": 0.029736592526424445
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5586701434159062,
"acc_stderr": 0.012682016335646683,
"acc_norm": 0.5586701434159062,
"acc_norm_stderr": 0.012682016335646683
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7242647058823529,
"acc_stderr": 0.027146271936625162,
"acc_norm": 0.7242647058823529,
"acc_norm_stderr": 0.027146271936625162
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.017242385828779627,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.017242385828779627
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7877551020408163,
"acc_stderr": 0.026176967197866767,
"acc_norm": 0.7877551020408163,
"acc_norm_stderr": 0.026176967197866767
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8805970149253731,
"acc_stderr": 0.02292879327721974,
"acc_norm": 0.8805970149253731,
"acc_norm_stderr": 0.02292879327721974
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.0266405825391332,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.0266405825391332
},
"harness|truthfulqa:mc|0": {
"mc1": 0.40269277845777235,
"mc1_stderr": 0.01716883093518722,
"mc2": 0.5707095526544473,
"mc2_stderr": 0.01525040450448649
},
"harness|winogrande|5": {
"acc": 0.8176795580110497,
"acc_stderr": 0.010851565594267207
},
"harness|gsm8k|5": {
"acc": 0.6444275966641395,
"acc_stderr": 0.013185402252713852
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__WizardLM-Math-70B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.92|
|AI2 Reasoning Challenge (25-Shot)|67.06|
|HellaSwag (10-Shot) |86.01|
|MMLU (5-Shot) |69.14|
|TruthfulQA (0-shot) |57.07|
|Winogrande (5-shot) |81.77|
|GSM8k (5-shot) |64.44|
|
theprint/ReWiz-Worldbuilder-7B | theprint | 2024-11-03T04:42:51Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:theprint/ReWiz-7B",
"base_model:merge:theprint/ReWiz-7B",
"base_model:theprint/WorldBuilder-7B",
"base_model:merge:theprint/WorldBuilder-7B",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T04:32:04Z | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- theprint/WorldBuilder-7B
- theprint/ReWiz-7B
model-index:
- name: ReWiz-Worldbuilder-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 25.1
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Worldbuilder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 25.08
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Worldbuilder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 2.95
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Worldbuilder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 2.57
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Worldbuilder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 16.39
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Worldbuilder-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 21.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-Worldbuilder-7B
      name: Open LLM Leaderboard
---
<img src="https://huggingface.co/theprint/ReWiz-Llama-3.2-3B/resolve/main/ReWiz_banner.png">
Half the data was geared towards better reasoning (EvolKit-20k and reasoning-base-20k); the other half helps de-censor the model (the WizardLM dataset).
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
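For intuition, here is an illustrative NumPy sketch of what SLERP does, roughly per flattened tensor (this is not mergekit's actual code): it interpolates along the arc between the two weight vectors instead of the straight line between them, which better preserves their geometry. The `t` values in the configuration further below vary this interpolation factor per layer group (`self_attn` vs. `mlp`).
```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between the models
    if omega < eps:  # nearly colinear: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```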
### Models Merged
The following models were included in the merge:
* [theprint/WorldBuilder-7B](https://huggingface.co/theprint/WorldBuilder-7B)
* [theprint/ReWiz-7B](https://huggingface.co/theprint/ReWiz-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: theprint/ReWiz-7B
        layer_range: [0, 32]
      - model: theprint/WorldBuilder-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: theprint/ReWiz-7B
parameters:
  t:
    - filter: self_attn
      value: [0.1, 0.5, 0.3, 0.7, 0.9]
    - filter: mlp
      value: [0.9, 0.5, 0.7, 0.3, 0.1]
    - value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theprint__ReWiz-Worldbuilder-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |15.66|
|IFEval (0-Shot) |25.10|
|BBH (3-Shot) |25.08|
|MATH Lvl 5 (4-Shot)| 2.95|
|GPQA (0-shot) | 2.57|
|MuSR (0-shot) |16.39|
|MMLU-PRO (5-shot) |21.90|
|
theprint/ReWiz-Llama-3.1-8B | theprint | 2024-11-03T04:41:56Z | 115 | 1 | null | [
"safetensors",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"theprint",
"ReWiz",
"en",
"dataset:KingNish/reasoning-base-20k",
"dataset:arcee-ai/EvolKit-20k",
"dataset:cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-16T00:47:37Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- unsloth
- trl
- sft
- theprint
- ReWiz
datasets:
- KingNish/reasoning-base-20k
- arcee-ai/EvolKit-20k
- cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered
---
<img src="https://huggingface.co/theprint/ReWiz-Llama-3.2-3B/resolve/main/ReWiz_banner.png">
Half the data was geared towards better reasoning (EvolKit-20k and reasoning-base-20k); the other half helps de-censor the model (the WizardLM dataset).
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
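As a rough usage sketch (assuming a recent `unsloth` install; the parameter names follow Unsloth's documented `FastLanguageModel` API, and the generation settings are placeholders, not from this card):
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit for inference (hypothetical settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theprint/ReWiz-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference kernels

inputs = tokenizer("Explain step by step: 25-4*2+3=?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```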
|
theprint/ReWiz-7B | theprint | 2024-11-03T04:40:57Z | 159 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"theprint",
"conversational",
"en",
"dataset:KingNish/reasoning-base-20k",
"dataset:arcee-ai/EvolKit-20k",
"dataset:cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-08T12:15:58Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- theprint
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
datasets:
- KingNish/reasoning-base-20k
- arcee-ai/EvolKit-20k
- cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered
model-index:
- name: ReWiz-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 40.48
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 23.5
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 2.57
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.36
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 16.74
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 18.56
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/ReWiz-7B
      name: Open LLM Leaderboard
---
<img src="https://huggingface.co/theprint/ReWiz-Llama-3.2-3B/resolve/main/ReWiz_banner.png">
# ReWiz-7B
This is a fine-tune of Mistral 7B Instruct (v0.3). Half the data was geared towards better reasoning (EvolKit-20k and reasoning-base-20k); the other half helps de-censor the model (the WizardLM dataset).
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theprint__ReWiz-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |17.54|
|IFEval (0-Shot) |40.48|
|BBH (3-Shot) |23.50|
|MATH Lvl 5 (4-Shot)| 2.57|
|GPQA (0-shot) | 3.36|
|MuSR (0-shot) |16.74|
|MMLU-PRO (5-shot) |18.56|
|
theprint/ReWiz-Nemo-12B-Instruct-GGUF | theprint | 2024-11-03T04:39:44Z | 323 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:theprint/ReWiz-Nemo-12B-Instruct",
"base_model:quantized:theprint/ReWiz-Nemo-12B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T02:15:41Z | ---
base_model:
- theprint/ReWiz-Nemo-12B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
<img src="https://huggingface.co/theprint/ReWiz-Llama-3.2-3B/resolve/main/ReWiz_banner.png">
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
GreenBitAI/Qwen-2.5-72B-Instruct-layer-mix-bpw-4.0 | GreenBitAI | 2024-11-03T04:15:47Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2024-10-29T04:04:17Z | ---
license: apache-2.0
---
|
waloneai/kathein-cartoon | waloneai | 2024-11-03T04:13:57Z | 7 | 3 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-03T04:13:53Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: kathein festival
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# kathein cartoon
<Gallery />
## Model description
kathein cartoon
## Trigger words
You should use `kathein festival` to trigger the image generation.
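A hedged loading sketch with `diffusers` (not part of the original card; the prompt and dtype/device settings are placeholders to adjust for your hardware):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("waloneai/kathein-cartoon")

# The trigger phrase from above activates the LoRA's style.
image = pipe("kathein festival, cartoon style, procession with lanterns",
             num_inference_steps=28).images[0]
image.save("kathein.png")
```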
## Download model
Weights for this model are available in Safetensors format.
[Download](/shweaung/kathein-cartoon/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
mradermacher/OGNO-7B-GGUF | mradermacher | 2024-11-03T04:11:11Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Omningotex-7b-slerp",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"en",
"base_model:paulml/OGNO-7B",
"base_model:quantized:paulml/OGNO-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T05:03:16Z | ---
base_model: paulml/OGNO-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Omningotex-7b-slerp
- eren23/dpo-binarized-NeutrixOmnibe-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/paulml/OGNO-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-GGUF/resolve/main/OGNO-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OGNO-7B-i1-GGUF | mradermacher | 2024-11-03T04:11:11Z | 420 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Omningotex-7b-slerp",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"en",
"base_model:paulml/OGNO-7B",
"base_model:quantized:paulml/OGNO-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-03T01:57:18Z | ---
base_model: paulml/OGNO-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Omningotex-7b-slerp
- eren23/dpo-binarized-NeutrixOmnibe-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/paulml/OGNO-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OGNO-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OGNO-7B-i1-GGUF/resolve/main/OGNO-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AbdoulayeDIOP/lettuce-npk-vit | AbdoulayeDIOP | 2024-11-03T04:09:58Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-03T00:41:04Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: lettuce-npk-vit
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9523809523809523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/abdoulaye-diop/lettuce-npk-deficiency-prediction/runs/zjp8gyck)
# lettuce-npk-vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1492
- Accuracy: 0.9524
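For reference, a minimal inference sketch using the standard `transformers` pipeline API (`leaf.jpg` is a placeholder path, not from the card):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="AbdoulayeDIOP/lettuce-npk-vit")
print(classifier("leaf.jpg"))  # e.g. [{'label': ..., 'score': ...}, ...]
```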
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
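These settings map onto `transformers.TrainingArguments` roughly as follows (a hedged sketch; `output_dir` and anything not listed above is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lettuce-npk-vit",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 16 x 4 = 64 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```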
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.1633 | 0.992 | 31 | 1.1239 | 0.8571 |
| 0.4802 | 1.984 | 62 | 0.4213 | 0.9048 |
| 0.1386 | 2.976 | 93 | 0.2501 | 0.9524 |
| 0.1003 | 4.0 | 125 | 0.1879 | 0.9524 |
| 0.0871 | 4.992 | 156 | 0.3482 | 0.8571 |
| 0.0702 | 5.984 | 187 | 0.2048 | 0.9524 |
| 0.0594 | 6.976 | 218 | 0.2824 | 0.9048 |
| 0.0425 | 8.0 | 250 | 0.2567 | 0.9524 |
| 0.0398 | 8.992 | 281 | 0.3363 | 0.8571 |
| 0.0348 | 9.984 | 312 | 0.2518 | 0.9524 |
| 0.0411 | 10.9760 | 343 | 0.0369 | 1.0 |
| 0.0445 | 12.0 | 375 | 0.2288 | 0.9524 |
| 0.0353 | 12.992 | 406 | 0.2364 | 0.8571 |
| 0.0384 | 13.984 | 437 | 0.2255 | 0.9524 |
| 0.0331 | 14.9760 | 468 | 0.0572 | 1.0 |
| 0.0252 | 16.0 | 500 | 0.2103 | 0.9524 |
| 0.0337 | 16.992 | 531 | 0.0295 | 1.0 |
| 0.0302 | 17.984 | 562 | 0.2805 | 0.9048 |
| 0.0328 | 18.976 | 593 | 0.2127 | 0.9524 |
| 0.0315 | 19.84 | 620 | 0.1492 | 0.9524 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
bartowski/Mistral-Crab-DPO-GGUF | bartowski | 2024-11-03T04:05:34Z | 93 | 0 | null | [
"gguf",
"alignment-handbook",
"generated_from_trainer",
"text-generation",
"en",
"base_model:THU-KEG/Mistral-Crab-DPO",
"base_model:quantized:THU-KEG/Mistral-Crab-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-03T03:09:31Z | ---
base_model: THU-KEG/Mistral-Crab-DPO
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- alignment-handbook
- generated_from_trainer
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Mistral-Crab-DPO
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4014">b4014</a> for quantization.
Original model: https://huggingface.co/THU-KEG/Mistral-Crab-DPO
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
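In code, filling the template looks like this (a minimal sketch; the function name is ours, not from the card):
```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble the Zephyr-style chat template shown above."""
    return (
        f"<|system|>\n{system_prompt}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

print(build_prompt("You are a helpful assistant.", "What is 25-4*2+3?"))
```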
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Mistral-Crab-DPO-f16.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-f16.gguf) | f16 | 14.50GB | false | Full F16 weights. |
| [Mistral-Crab-DPO-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q8_0.gguf) | Q8_0 | 7.70GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Mistral-Crab-DPO-Q6_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q6_K_L.gguf) | Q6_K_L | 6.01GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Mistral-Crab-DPO-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q6_K.gguf) | Q6_K | 5.95GB | false | Very high quality, near perfect, *recommended*. |
| [Mistral-Crab-DPO-Q5_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q5_K_L.gguf) | Q5_K_L | 5.22GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Mistral-Crab-DPO-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q5_K_M.gguf) | Q5_K_M | 5.14GB | false | High quality, *recommended*. |
| [Mistral-Crab-DPO-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q5_K_S.gguf) | Q5_K_S | 5.00GB | false | High quality, *recommended*. |
| [Mistral-Crab-DPO-Q4_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_K_L.gguf) | Q4_K_L | 4.47GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Mistral-Crab-DPO-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_K_M.gguf) | Q4_K_M | 4.37GB | false | Good quality, default size for most use cases, *recommended*. |
| [Mistral-Crab-DPO-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_K_S.gguf) | Q4_K_S | 4.14GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Mistral-Crab-DPO-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0.gguf) | Q4_0 | 4.13GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Mistral-Crab-DPO-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.11GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Mistral-Crab-DPO-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.11GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [Mistral-Crab-DPO-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.11GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [Mistral-Crab-DPO-Q3_K_XL.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_XL.gguf) | Q3_K_XL | 3.94GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Mistral-Crab-DPO-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ4_XS.gguf) | IQ4_XS | 3.91GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Mistral-Crab-DPO-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_L.gguf) | Q3_K_L | 3.83GB | false | Lower quality but usable, good for low RAM availability. |
| [Mistral-Crab-DPO-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_M.gguf) | Q3_K_M | 3.52GB | false | Low quality. |
| [Mistral-Crab-DPO-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ3_M.gguf) | IQ3_M | 3.29GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Mistral-Crab-DPO-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q3_K_S.gguf) | Q3_K_S | 3.17GB | false | Low quality, not recommended. |
| [Mistral-Crab-DPO-IQ3_XS.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ3_XS.gguf) | IQ3_XS | 3.02GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Mistral-Crab-DPO-Q2_K_L.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q2_K_L.gguf) | Q2_K_L | 2.85GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Mistral-Crab-DPO-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-Q2_K.gguf) | Q2_K | 2.72GB | false | Very low quality but surprisingly usable. |
| [Mistral-Crab-DPO-IQ2_M.gguf](https://huggingface.co/bartowski/Mistral-Crab-DPO-GGUF/blob/main/Mistral-Crab-DPO-IQ2_M.gguf) | IQ2_M | 2.50GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embedding and output weights quantized to Q8_0 instead of their usual default.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Mistral-Crab-DPO-GGUF --include "Mistral-Crab-DPO-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Mistral-Crab-DPO-GGUF --include "Mistral-Crab-DPO-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Mistral-Crab-DPO-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
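As a back-of-the-envelope helper, here is a hedged sketch that applies the 1-2GB rule of thumb to file sizes copied from the table above (the sizes are hard-coded examples, not read from the repo):
```python
# Sketch: pick the largest quant that leaves ~2GB of headroom in VRAM.
quants = {  # file sizes in GB, copied from the table above
    "IQ4_XS": 3.91, "Q3_K_L": 3.83, "Q3_K_M": 3.52, "IQ3_M": 3.29,
    "Q3_K_S": 3.17, "IQ3_XS": 3.02, "Q2_K_L": 2.85, "Q2_K": 2.72, "IQ2_M": 2.50,
}
vram_gb = 6.0  # your GPU's total VRAM
fits = {name: size for name, size in quants.items() if size <= vram_gb - 2.0}
print(max(fits, key=fits.get) if fits else "nothing fits; offload to system RAM")
```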
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/LWM-Text-Chat-512K-GGUF | mradermacher | 2024-11-03T04:05:10Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LargeWorldModel/LWM-Text-Chat-512K",
"base_model:quantized:LargeWorldModel/LWM-Text-Chat-512K",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T04:50:50Z | ---
base_model: LargeWorldModel/LWM-Text-Chat-512K
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LargeWorldModel/LWM-Text-Chat-512K
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LWM-Text-Chat-512K-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
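None of the files in the table below are split (the largest is 7.3GB), but for repos where a large quant is shipped in parts, they can be rejoined with a few lines of Python. A minimal sketch; the `*.gguf.part1of2`-style naming is an assumption for illustration:
```python
# Sketch: rejoin a split GGUF by streaming its parts in order.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("LWM-Text-Chat-512K.Q8_0.gguf.part*"))
with open("LWM-Text-Chat-512K.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream, so large parts don't fill RAM
print(f"joined {len(parts)} parts")
```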
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-Chat-512K-GGUF/resolve/main/LWM-Text-Chat-512K.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Kimberlywww/bert-finetuned-squad | Kimberlywww | 2024-11-03T03:59:37Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-11-03T02:01:09Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
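A minimal sketch of how these hyperparameters might map onto the 🤗 `Trainer` API; the datasets and preprocessing are left as placeholders because the card does not specify them:
```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# The train/eval datasets are placeholders; the card doesn't name them.
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```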
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
kiwi1229/cross_encoder_distil_deberta_3 | kiwi1229 | 2024-11-03T03:45:52Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-03T03:45:31Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlekseyCalvin/Mayakovsky_Posters_2_5kSt | AlekseyCalvin | 2024-11-03T03:19:32Z | 9 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"image-generation",
"flux-diffusers",
"schnell",
"photo",
"Mayakovsky",
"art",
"poster",
"realism",
"character",
"historical person",
"poetry",
"literature",
"history",
"archival",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-11-03T00:21:33Z | ---
license: apache-2.0
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
- image-generation
- flux-diffusers
- schnell
- photo
- Mayakovsky
- art
- poster
- realism
- character
- historical person
- poetry
- literature
- history
- archival
base_model: "black-forest-labs/FLUX.1-schnell"
pipeline_tag: text-to-image
library_name: diffusers
emoji: 🔜
instance_prompt: MAYAK style Constructivist Poster
widget:
- text: MAYAK style drawing of Osip Mandelshtam reciting /OH, BUT PETERSBURG! NO! IM NOT READY TO DIE! YOU STILL HOLD ALL THE TELEPHONE NUMBERS OF MINE!/
output:
url: 1730591317624__000005000_3.jpg
- text: >-
(Post-Gen Edited Text) MAYAK style drawing of aging poet Osip Mandelstam reading in a trolley in Leningrad /IVE RETURNED TO MY CITY, I KNOW IT TO TEARS, TINY VEINS, & TO GLANDS, SWOLLEN UP LIKE A KID'S!/
output:
url: OMlend.jpg
- text: >-
MAYAK style satyrical Constructivist book cover panel featuring Vladimir Mayakovsky in a suit, yelling out from the top of an American skyscraper to top-hatted capitalists below the following text: "AND YOU? COULD YOU HAVE?", stylized book art
output:
url: Ay_2.jpg
- text: >-
MAYAK style art of poet Mandelstam reading /YOU'VE RETURNED HERE, SO SWALLOW THEN, FAST AS YOU MIGHT, ALL THE FISH OIL OF LENINGRAD'S RIVERINE LIGHT!/
output:
url: OmRIver5k.jpg
---
<Gallery />
# Mayakovsky Style Soviet Constructivist Posters & Cartoons Flux LoRA – Version 2 – by SOON®
Trained via Ostris' [ai-toolkit](https://replicate.com/ostris/flux-dev-lora-trainer/train) on 50 high-resolution scans of 1910s/1920s posters & artworks by the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
For this training experiment, we first spent many days rigorously translating the textual elements (slogans, captions, titles, inset poems, speech fragments, etc), with form/signification/rhymes intact, throughout every image subsequently used for training. <br>
These translated textographic elements were, furthermore, re-placed by us into their original visual contexts, using fonts matched up to the sources. <br>
We then manually composed highly detailed paragraph-long captions, wherein we detailed both the graphic and the textual content of each piece, its layout, as well as the most intuitive/intended apprehension of each composition. <br>
This second version of the resultant LoRA was trained on our custom Schnell-based checkpoint (Historic Color 2), available [here in fp8 Safetensors](https://huggingface.co/AlekseyCalvin/HistoricColorSoonrFluxV2/tree/main) and [here in Diffusers format](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers). <br>
The training went for 5000 steps at a DiT Learning Rate of .00002, batch 1, with the ademamix8bit optimizer, and both text encoders trained alongside the DiT!<br>
No synthetic data was used for the training, nor any auto-generated captions! Everything was manually and attentively pre-curated with a deep respect for the sources used. <br>
This is a **rank-32/alpha-64 Constructivist Art & Soviet Satirical Cartoon LoRA for Flux** (whether of a [Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), a [Schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell), or a [Soon®](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) sort...) <br>
Check out our **translations of Mayakovsky's verse-works**, adapted from a proto-Soviet song-tongue into a Worldish one...<br>
And found, along with many other poets' songs and tomes...
Over **at [SilverAgePoets.com](https://www.silveragepoets.com)!**
## Trigger words
You should use `MAYAK style` or `MAYAK style Constructivist poster art` or `MAYAK style satyrical drawing` or suchlike to summon the poet's deathless pens and brushes.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AlekseyCalvin/Mayakovsky_Posters_2_5kSt', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
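Since this LoRA targets a Schnell-derived checkpoint, few-step, guidance-free sampling may suit it better than Dev defaults. Continuing the snippet above; the step count and guidance value are typical Schnell settings, not values specified by the author:
```py
# Sketch: typical Schnell-style sampling settings (assumed, not author-specified).
image = pipeline(
    'MAYAK style Constructivist poster art of a poet on a rooftop',
    num_inference_steps=4,  # Schnell checkpoints are distilled for ~4 steps
    guidance_scale=0.0,     # and for guidance-free sampling
).images[0]
image.save('mayak_poster.png')
```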
|
rahulvk007/CustomLlama | rahulvk007 | 2024-11-03T03:08:58Z | 86 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-03T03:04:32Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rehan1234/whispher-small-gujarati | Rehan1234 | 2024-11-03T03:02:45Z | 7 | 0 | null | [
"safetensors",
"whisper",
"whisper-event",
"gu",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-10-18T09:01:48Z | ---
language:
- gu
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Gujarati Small - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: gu_in
split: test
metrics:
- type: wer
value: 14.73
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Gujarati Small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Gujarati data available from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to transcribe a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-gujarati-small", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="gu", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-gujarati-small", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="gu", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#gujarati-labelled-total-duration-is-430-hours)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- [OpenSLR](https://www.openslr.org/78/)
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- training_steps: 20532 (terminated upon convergence. Initially set to 21240 steps)
- mixed_precision_training: True
|
mradermacher/LWM-Text-256K-GGUF | mradermacher | 2024-11-03T03:02:11Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LargeWorldModel/LWM-Text-256K",
"base_model:quantized:LargeWorldModel/LWM-Text-256K",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T04:28:42Z | ---
base_model: LargeWorldModel/LWM-Text-256K
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LargeWorldModel/LWM-Text-256K
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LWM-Text-256K-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
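For example, one way to fetch just one of the files below from Python is with `huggingface_hub`'s pattern filter (the file name is taken from the table that follows):
```python
# Sketch: fetch only the Q4_K_M file from this repo using a glob pattern.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mradermacher/LWM-Text-256K-GGUF",
    allow_patterns=["*Q4_K_M.gguf"],
    local_dir=".",
)
print(local_dir)
```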
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LWM-Text-256K-GGUF/resolve/main/LWM-Text-256K.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
streamize/leosam5-lightning | streamize | 2024-11-03T02:58:38Z | 49 | 0 | diffusers | [
"diffusers",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-03T02:56:31Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kayfour/T3Q-ko-solar-dpo-v8-testv1 | kayfour | 2024-11-03T02:56:21Z | 12 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-11-01T05:39:01Z | ---
license: apache-2.0
---
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
mradermacher/DPOB-INMTOB-7B-GGUF | mradermacher | 2024-11-03T02:37:13Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Omningotex-7b-slerp",
"eren23/merged-dpo-binarized-NeutrixOmnibe-7B",
"en",
"base_model:paulml/DPOB-INMTOB-7B",
"base_model:quantized:paulml/DPOB-INMTOB-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T03:49:51Z | ---
base_model: paulml/DPOB-INMTOB-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Omningotex-7b-slerp
- eren23/merged-dpo-binarized-NeutrixOmnibe-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/paulml/DPOB-INMTOB-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DPOB-INMTOB-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
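Once downloaded, a GGUF from the table below can be loaded with any llama.cpp-based runtime. A minimal sketch using `llama-cpp-python`; the file name is the Q4_K_M entry below, and chat-template handling is left to the runtime's defaults:
```python
# Sketch: run one of the quants below locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="DPOB-INMTOB-7B.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support
)
out = llm("Q: What does DPO stand for?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```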
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DPOB-INMTOB-7B-GGUF/resolve/main/DPOB-INMTOB-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xu-Ouyang/pythia-2.8b-deduped-int3-step16-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T02:36:22Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-03T02:33:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
osoriohans/bert_ciberseguridad_1 | osoriohans | 2024-11-03T02:29:08Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-03T02:28:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tele-AI/TeleChat2-115B | Tele-AI | 2024-11-03T02:09:42Z | 7 | 0 | null | [
"safetensors",
"telechat",
"custom_code",
"arxiv:2104.09864",
"arxiv:2002.05202",
"arxiv:1910.07467",
"arxiv:2401.03804",
"license:apache-2.0",
"region:us"
] | null | 2024-09-20T05:50:48Z | ---
license: apache-2.0
---
<div align="center">
<h1>
Xingchen Semantic Large Model - TeleChat2
</h1>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/Tele-AI" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/TeleAI" target="_blank">ModelScope</a> • 🏔 <a href="https://gitee.com/mindspore/mindformers/tree/dev/research/telechat" target="_blank">MindSpore</a> • 🐾 <a href="https://gitee.com/Tele-AI/tele-chat" target="_blank">gitee</a>️ • 💬 <a href="https://github.com/Tele-AI/Telechat/blob/master/images/wechat.jpg" target="_blank">WeChat</a>
</p>
# Table of Contents
- [Model Introduction](#model-introduction)
- [Evaluation](#evaluation)
- [Statement, License, and Citation](#statement-license-and-citation)
# Latest News
- 2024.9.20 Released the TeleChat2-115B model, **the first open-source 100B-parameter model trained entirely on Chinese domestic compute**.
# Model Introduction
### Xingchen Semantic Large Model - TeleChat2
- **TeleChat2** is a large language model developed and trained by the Institute of Artificial Intelligence of China Telecom; the entire series is **trained exclusively on domestic Chinese compute**.
- The open-sourced **TeleChat2-115B** model was trained on 10 trillion tokens of high-quality Chinese and English corpora, and the weight files of the chat model **TeleChat2-115B** are released in multiple formats for multiple platforms.
- **TeleChat2** improves on **TeleChat1** in training data, training methodology, and more, with large gains on general QA/knowledge, code, and math benchmarks.
- **TeleChat2** is trained entirely on domestic compute and a domestic deep-learning framework, making both the hardware and the software stack more self-controllable. The MP, PP, and SP implementations were optimized to improve model performance, and operators were optimized to speed up training.
- We ran extensive small-model experiments to verify scaling laws, searching for the optimal design across model architectures, data mixtures, and data-cleaning strategies.
- RingAttention and other sequence-parallel schemes were adopted to speed up long-context training; an ntk-aware + attention-scaling approach ensures a smooth transition when switching training lengths, preserving model quality across data of different lengths.
- For fine-tuning data, we increased instruction complexity and diversity, generated high-quality data through synthesis and human annotation, and used rejection sampling to produce diverse reasoning paths; a scheme that uses the base model to reverse-select preference-alignment data further maximizes model quality on the adapted data.
- General capability improves by more than 29% over the earlier TeleChat series, with large gains in logical reasoning, summarization, long-form writing, and mathematical computation.
### Model Architecture
We designed **TeleChat2** with a standard `Decoder-only` architecture, using [Rotary Embedding](https://arxiv.org/pdf/2104.09864.pdf) positional encoding, the [SwiGLU](https://arxiv.org/pdf/2002.05202.pdf) activation function in place of GELU, and Pre-Normalization based on [RMSNorm](https://arxiv.org/abs/1910.07467) for layer normalization. The word-embedding layer and the output lm head of **TeleChat2** do not share parameters, which helps training stability and convergence. We chose GQA to reduce the parameter count and compute of the attention blocks and to speed up both training and inference.
The architecture configuration of **TeleChat2** is shown in the table below:
| | layer_num | hidden_size | ffn_hidden_size | head_num | tie_word_embeddings | GQA |
| ---- | --------- | ----------- | --------------- | -------- | ------------------- | ---- |
| 115B | 96 | 8192 | 40960 | 64 | No | Yes |
# Evaluation
TeleChat performs well on evaluations compared with models of a similar scale. Our evaluation suite covers datasets including MMLU, C-Eval, CMMLU, GSM8K, MATH, HumanEval, and BBH, measuring capabilities such as instruction following, exam-style knowledge, mathematical computation and reasoning, and code generation.
## Evaluation Datasets
### General capability
- MMLU is a comprehensive English benchmark covering 57 subjects, including the humanities, social sciences, natural sciences, elementary mathematics, US history, computer science, law, and more.
- C-Eval is a comprehensive Chinese evaluation set of multiple-choice questions at middle-school, high-school, university, and professional difficulty levels, spanning 52 subject areas.
- CMMLU is likewise a comprehensive Chinese evaluation set, covering 67 topics ranging from basic subjects to advanced professional material.
### Reasoning and code
- GSM8K contains 8.5K high-quality grade-school math problems and measures a language model's mathematical reasoning ability.
- HumanEval is a code benchmark provided by OpenAI; it consists of 164 programming problems that require generating a correct code snippet from a given problem statement and code template.
- BBH (BIG-Bench Hard) contains 23 challenging BIG-Bench tasks on which earlier language models did not outperform the average human rater.
- MBPP contains about 1,000 crowd-sourced Python programming problems covering programming fundamentals and standard-library use. Each problem includes a task description, a reference solution, and 3 automated test cases.
### Subjective (open-ended) evaluation
- [AlignBench](https://github.com/THUDM/AlignBench) is a multi-dimensional benchmark for evaluating how well Chinese LLMs are aligned, containing 638 single-turn subjective questions.
- [MT-bench](https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/README.md) is a challenging set of multi-turn open-ended questions for evaluating chat assistants, containing 80 multi-turn subjective questions.
### Instruction following
- [IFEval](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/ifeval/README.md) evaluates how precisely a language model follows instructions; it contains 500 verifiable instructions and is one of the core benchmarks in the Open LLM Leaderboard.
## Evaluation Results
| Dataset | Llama-3.1-70B | Qwen1.5-110B | Qwen2-72-instruct | DeepSeek-v2 | TeleChat2-115B |
|:----------:|:-------------:|:------------:|:-----------------:|:-----------:|:--------------:|
| C-Eval | - | - | 83.8 | 78 | **86.9** |
| MMLU | **86** | 80.4 | 82.3 | 77.8 | 80.9 |
| CMMLU | 69.01 | 87.64 | 87.47 | 81.6 | **89.94** |
| GSM8K | **95.1** | 85.4 | 91.1 | 92.2 | 92.2 |
| HumanEval | 80.5 | 52.4 |**86** | 81.1 | 75 |
| BBH | - | 74.8 | - | 79.7 | **89.04** |
| MBPP | **86** | 58.1 | 80.2 | 72 | 78 |
| AlignBench | - | 7.86 | **8.27** | 7.91 | 8.03 |
| MT-bench | 8.79 | 8.88 | **9.12** | 8.97 | 8.89 |
| IFEval | **87.5** | - | 77.6 | 63.8 | 82.81 |
# Model Inference and Deployment
### Inference
Inference currently supports both single-GPU and multi-GPU setups, and long-context inference has been partially optimized.
**Inference example**
```python
>>> import os
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained('../models/115B', trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained('../models/115B', trust_remote_code=True, device_map="auto", torch_dtype=torch.float16)
>>> generate_config = GenerationConfig.from_pretrained('../models/115B')
>>> question = "生抽与老抽的区别?"  # "What is the difference between light and dark soy sauce?"
>>> answer, history = model.chat(tokenizer=tokenizer, question=question, history=[], generation_config=generate_config, stream=False)
>>> print(answer)
Light soy sauce and dark soy sauce are two different soy sauces; they differ in flavor, color, and use.
1. Color: light soy sauce is fairly pale, a reddish-brown or brownish-red, while dark soy sauce is darker, a brownish black.
2. Taste: light soy sauce tastes fresh, salty, and slightly sweet, while dark soy sauce is richer and deeper in color. Choosing the soy sauce that matches your taste and cooking needs gives better flavor and results.
```
# Domestic Hardware Adaptation
### Ascend Atlas 800T A2 training server + MindSpore framework: training and inference adaptation
The TeleChat2 models currently support the Ascend Atlas 800T A2 training server and can be trained and evaluated with the MindSpore framework.
- Performance comparison:

| NAME | performance(samples/p/s) | Epochs | AMP_Type |
|--------------------------| ---------------------: | ------ | -------: |
| 115B | 0.0192 | 1 | O1 |
| 115B | 0.0174 | 1 | O2 |

Note: training in groups of 8 servers is recommended.
- TeleChat supports the Ascend Atlas 800T A2 training server and can be trained with the MindSpore framework; the modeling code, README, and scripts needed for training have been released: [TeleChat-MindSpore](https://gitee.com/mindspore/mindformers/tree/dev/research/telechat)
# Statement, License, and Citation
### Statement
We hereby declare that the TeleChat models and their derivatives must not be used for any activities that endanger national or public security or violate the law. We also require that users not deploy the TeleChat models in internet services that have not passed security review and filing. We hope all users abide by these principles to ensure that technology develops in a lawful and compliant environment.
We have done our best to ensure the compliance of the data used to train the models. Nevertheless, given the complexity of models and data, unforeseen issues may remain despite these extensive efforts. We therefore accept no liability for any problems arising from the use of the open-source TeleChat models, including but not limited to data-security issues, public-opinion risks, or any risks and problems caused by the models being misled, misused, disseminated, or improperly exploited.
### License
Community use of the TeleChat models must follow the [TeleChat Model Community License Agreement](./TeleChat模型社区许可协议.pdf). TeleChat supports commercial use; if you plan to use the TeleChat models or their derivatives for commercial purposes, submit the application materials required by the agreement to [email protected]. Upon approval, you will be granted a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable commercial copyright license.
### Citation
To cite our work, please use the following reference:
```
@misc{wang2024telechat,
title={TeleChat Technical Report},
author={Zihan Wang and Xinzhang Liu and Shixuan Liu and Yitong Yao and Yuyao Huang and Zhongjiang He and Xuelong Li and Yongxiang Li and Zhonghao Che and Zhaoxi Zhang and Yan Wang and Xin Wang and Luwen Pu and Huihan Xu and Ruiyu Fang and Yu Zhao and Jie Zhang and Xiaomeng Huang and Zhilong Lu and Jiaxin Peng and Wenjun Zheng and Shiquan Wang and Bingkai Yang and Xuewei he and Zhuoru Jiang and Qiyi Xie and Yanhan Zhang and Zhongqiu Li and Lingling Shi and Weiwei Fu and Yin Zhang and Zilu Huang and Sishi Xiong and Yuxiang Zhang and Chao Wang and Shuangyong Song},
year={2024},
eprint={2401.03804},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Xu-Ouyang/pythia-2.8b-deduped-int4-step8-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T02:08:38Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-03T02:08:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF | featherless-ai-quants | 2024-11-03T02:01:46Z | 39 | 0 | null | [
"gguf",
"text-generation",
"base_model:TroyDoesAI/Mermaid-Coder-MoE-Expert1",
"base_model:quantized:TroyDoesAI/Mermaid-Coder-MoE-Expert1",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-03T01:01:46Z | ---
base_model: TroyDoesAI/Mermaid-Coder-MoE-Expert1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TroyDoesAI/Mermaid-Coder-MoE-Expert1 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q8_0.gguf) | 10875.85 MB |
| Q4_K_S | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q4_K_S.gguf) | 5835.08 MB |
| Q2_K | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q2_K.gguf) | 3817.78 MB |
| Q6_K | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q6_K.gguf) | 8397.30 MB |
| Q3_K_M | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q3_K_M.gguf) | 4954.98 MB |
| Q3_K_S | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q3_K_S.gguf) | 4448.48 MB |
| Q3_K_L | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q3_K_L.gguf) | 5388.98 MB |
| Q4_K_M | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q4_K_M.gguf) | 6162.33 MB |
| Q5_K_S | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q5_K_S.gguf) | 7054.70 MB |
| Q5_K_M | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-Q5_K_M.gguf) | 7245.95 MB |
| IQ4_XS | [TroyDoesAI-Mermaid-Coder-MoE-Expert1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid-Coder-MoE-Expert1-GGUF/blob/main/TroyDoesAI-Mermaid-Coder-MoE-Expert1-IQ4_XS.gguf) | 5557.67 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF | mradermacher | 2024-11-03T02:00:10Z | 316 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"dpo",
"conversation",
"text-generation-inference",
"Kukedlc/NeuTrixOmniBe-7B-model-remix",
"en",
"dataset:argilla/OpenHermes2.5-dpo-binarized-alpha",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:quantized:eren23/dpo-binarized-NeutrixOmnibe-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-02T23:42:30Z | ---
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
datasets:
- argilla/OpenHermes2.5-dpo-binarized-alpha
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- dpo
- conversation
- text-generation-inference
- Kukedlc/NeuTrixOmniBe-7B-model-remix
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/dpo-binarized-NeutrixOmnibe-7B-i1-GGUF/resolve/main/dpo-binarized-NeutrixOmnibe-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DanJoshua/profesor_Swin3D_B_VIOPERU | DanJoshua | 2024-11-03T01:50:57Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-11-03T01:11:52Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: profesor_Swin3D_B_VIOPERU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# profesor_Swin3D_B_VIOPERU
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8571
- F1: 0.8571
- Precision: 0.8571
- Recall: 0.8571
- Roc Auc: 0.9037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 84
- training_steps: 560
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 1.3718 | 2.0161 | 37 | 0.6728 | 0.6607 | 0.6166 | 0.7979 | 0.6607 | 0.7666 |
| 1.307 | 5.0071 | 74 | 0.6388 | 0.8393 | 0.8367 | 0.8619 | 0.8393 | 0.8367 |
| 1.2244 | 7.0232 | 111 | 0.5896 | 0.8036 | 0.8020 | 0.8136 | 0.8036 | 0.8508 |
| 1.0091 | 10.0143 | 148 | 0.5448 | 0.8214 | 0.8205 | 0.8281 | 0.8214 | 0.8648 |
| 0.8839 | 13.0054 | 185 | 0.4877 | 0.8393 | 0.8380 | 0.8505 | 0.8393 | 0.8763 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
lukaspetersson/flux_oxe | lukaspetersson | 2024-11-03T01:48:34Z | 5 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-03T01:48:31Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Flux_Oxe
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base FLUX.1-dev pipeline, then attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lukaspetersson/flux_oxe', weight_name='lora.safetensors')
# Include the trigger word `TOK` in the prompt (see "Trigger words" above).
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Xu-Ouyang/pythia-2.8b-deduped-int3-step8-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T01:46:23Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-03T01:42:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/WestLake-7B-v2-laser-i1-GGUF | mradermacher | 2024-11-03T01:46:10Z | 69 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cognitivecomputations/WestLake-7B-v2-laser",
"base_model:quantized:cognitivecomputations/WestLake-7B-v2-laser",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-03T00:35:42Z | ---
base_model: cognitivecomputations/WestLake-7B-v2-laser
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WestLake-7B-v2-laser-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WestLake-7B-v2-laser-i1-GGUF/resolve/main/WestLake-7B-v2-laser.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kurapika993/sentiment | Kurapika993 | 2024-11-03T01:36:52Z | 182 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T23:46:31Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2664 | 1.0 | 1563 | 0.2394 | 0.9067 |
| 0.2312 | 2.0 | 3126 | 0.2318 | 0.9149 |
| 0.2173 | 3.0 | 4689 | 0.2231 | 0.9179 |
| 0.2011 | 4.0 | 6252 | 0.2271 | 0.9184 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF | mradermacher | 2024-11-03T01:29:14Z | 38 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp",
"base_model:quantized:Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T03:37:59Z | ---
base_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF | mradermacher | 2024-11-03T01:29:10Z | 16 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp",
"base_model:quantized:Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-03T00:19:13Z | ---
base_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenHermes-2.5-neural-chat-v3-2-Slerp-i1-GGUF/resolve/main/OpenHermes-2.5-neural-chat-v3-2-Slerp.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mtzig/tinyllama_run | mtzig | 2024-11-03T01:28:04Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"token-classification",
"generated_from_trainer",
"base_model:TinyPixel/small-llama2",
"base_model:finetune:TinyPixel/small-llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-03T01:21:17Z | ---
library_name: transformers
base_model: TinyPixel/small-llama2
tags:
- generated_from_trainer
model-index:
- name: tinyllama_run
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama_run
This model is a fine-tuned version of [TinyPixel/small-llama2](https://huggingface.co/TinyPixel/small-llama2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 25
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.1
|
featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF | featherless-ai-quants | 2024-11-03T01:26:11Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:TroyDoesAI/Mermaid_PythonCoder",
"base_model:quantized:TroyDoesAI/Mermaid_PythonCoder",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-03T00:54:50Z | ---
base_model: TroyDoesAI/Mermaid_PythonCoder
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TroyDoesAI/Mermaid_PythonCoder GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [TroyDoesAI-Mermaid_PythonCoder-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q8_0.gguf) | 10875.85 MB |
| Q4_K_S | [TroyDoesAI-Mermaid_PythonCoder-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q4_K_S.gguf) | 5835.08 MB |
| Q2_K | [TroyDoesAI-Mermaid_PythonCoder-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q2_K.gguf) | 3817.78 MB |
| Q6_K | [TroyDoesAI-Mermaid_PythonCoder-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q6_K.gguf) | 8397.30 MB |
| Q3_K_M | [TroyDoesAI-Mermaid_PythonCoder-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q3_K_M.gguf) | 4954.97 MB |
| Q3_K_S | [TroyDoesAI-Mermaid_PythonCoder-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q3_K_S.gguf) | 4448.47 MB |
| Q3_K_L | [TroyDoesAI-Mermaid_PythonCoder-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q3_K_L.gguf) | 5388.97 MB |
| Q4_K_M | [TroyDoesAI-Mermaid_PythonCoder-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q4_K_M.gguf) | 6162.33 MB |
| Q5_K_S | [TroyDoesAI-Mermaid_PythonCoder-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q5_K_S.gguf) | 7054.70 MB |
| Q5_K_M | [TroyDoesAI-Mermaid_PythonCoder-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-Q5_K_M.gguf) | 7245.95 MB |
| IQ4_XS | [TroyDoesAI-Mermaid_PythonCoder-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TroyDoesAI-Mermaid_PythonCoder-GGUF/blob/main/TroyDoesAI-Mermaid_PythonCoder-IQ4_XS.gguf) | 5557.67 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/LaseredHermes-7B-i1-GGUF | mradermacher | 2024-11-03T01:23:11Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:teknium/openhermes",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Intel/orca_dpo_pairs",
"base_model:LHC88/LaseredHermes-7B",
"base_model:quantized:LHC88/LaseredHermes-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T22:30:19Z | ---
base_model: LHC88/LaseredHermes-7B
datasets:
- teknium/openhermes
- allenai/ultrafeedback_binarized_cleaned
- Intel/orca_dpo_pairs
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LHC88/LaseredHermes-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LaseredHermes-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LaseredHermes-7B-i1-GGUF/resolve/main/LaseredHermes-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Xu-Ouyang/pythia-2.8b-deduped-int4-step4-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T01:20:50Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-03T01:20:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
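Since the card itself is unpopulated, here is a minimal, hypothetical sketch of loading this GPTQ checkpoint with 🤗 Transformers. It assumes the standard GPTQ integration (the `optimum` and `auto-gptq` packages must be installed); the prompt and generation settings are illustrative, not values from this card.

```python
# Minimal sketch (assumes the transformers GPTQ integration via the
# optimum + auto-gptq packages; prompt and settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-2.8b-deduped-int4-step4-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ quantization parameters are read from the repo's config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The Pythia suite of models was trained on", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```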
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zixianma/mma_mantis_mix_586k-seq_len_8192-lr_1e-5-gl_bs_128-ep_1 | zixianma | 2024-11-03T01:12:57Z | 9 | 0 | null | [
"safetensors",
"llava",
"generated_from_trainer",
"base_model:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"base_model:finetune:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"license:llama3",
"region:us"
] | null | 2024-11-02T06:15:24Z | ---
license: llama3
base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind
tags:
- generated_from_trainer
model-index:
- name: mma_mantis_mix_586k-seq_len_8192-lr_1e-5-gl_bs_128-ep_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://salesforceairesearch.wandb.io/jianguozhang/Mantis/runs/cgv4spj9)
# mma_mantis_mix_586k-seq_len_8192-lr_1e-5-gl_bs_128-ep_1
This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.43.0
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-160m-deduped-int4-step128-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T01:12:12Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-02T04:51:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kiwi1229/cross_encoder_distil_roberta_base_2 | kiwi1229 | 2024-11-03T00:59:19Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-03T00:59:08Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
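As the card is unpopulated, here is a minimal, hypothetical sketch of scoring a sentence pair with this cross-encoder. It assumes a standard sequence-classification head; the meaning and scale of the logits are not documented on this card, and the example pair is illustrative only.

```python
# Minimal sketch (assumes a standard sequence-classification cross-encoder;
# the label semantics of the logits are not documented on this card).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "kiwi1229/cross_encoder_distil_roberta_base_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A cross-encoder scores a (query, candidate) pair jointly in one forward pass.
inputs = tokenizer("How old is the sun?",
                   "The sun is about 4.6 billion years old.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```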
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
faizalbs777/news-summarization-finetuned-mistral-7b | faizalbs777 | 2024-11-03T00:54:23Z | 5 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T09:19:10Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: news-summarization-finetuned-mistral-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news-summarization-finetuned-mistral-7b
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1549 | 0.0109 | 20 | 2.1256 |
| 2.0143 | 0.0219 | 40 | 2.0855 |
| 2.0595 | 0.0328 | 60 | 2.0559 |
| 1.9444 | 0.0437 | 80 | 2.0500 |
| 2.1425 | 0.0547 | 100 | 2.0477 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.4.1+cu121
- Datasets 2.16.0
- Tokenizers 0.20.0 |
mradermacher/MS-Schisandra-22B-vA2-i1-GGUF | mradermacher | 2024-11-03T00:51:15Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T21:15:50Z | ---
base_model: Nohobby/MS-Schisandra-22B-vA2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nohobby/MS-Schisandra-22B-vA2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
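As a concrete starting point, here is a minimal, hypothetical sketch of running one of the quants below with the llama-cpp-python bindings; the chosen file name and generation settings are illustrative, and the model file must be downloaded locally first.

```python
# Minimal sketch (assumes llama-cpp-python is installed and the GGUF
# file has been downloaded; file name and settings are illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="MS-Schisandra-22B-vA2.i1-Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,       # context window; lower it if memory is tight
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```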
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF/resolve/main/MS-Schisandra-22B-vA2.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MS-Schisandra-22B-vA2-GGUF | mradermacher | 2024-11-03T00:51:14Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-02T16:48:36Z | ---
base_model: Nohobby/MS-Schisandra-22B-vA2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nohobby/MS-Schisandra-22B-vA2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA2-GGUF/resolve/main/MS-Schisandra-22B-vA2.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Mavkif/urdu-mt5-mmarco | Mavkif | 2024-11-03T00:47:25Z | 9 | 0 | null | [
"safetensors",
"mt5",
"Information Retrieval",
"Natural Language Processing",
"Question Answering",
"base_model:unicamp-dl/mt5-base-mmarco-v2",
"base_model:finetune:unicamp-dl/mt5-base-mmarco-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-11-02T12:34:55Z | ---
metrics:
- Recall @10 0.438
- MRR @10 0.247
base_model:
- unicamp-dl/mt5-base-mmarco-v2
tags:
- Information Retrieval
- Natural Language Processing
- Question Answering
license: apache-2.0
---
# Urdu mT5 msmarco: Fine-Tuned mT5 Model for Urdu Information Retrieval
As part of ongoing efforts to make Information Retrieval (IR) more inclusive, this model addresses the needs of low-resource languages, focusing specifically on Urdu.
We created this model by translating the MS MARCO dataset into Urdu using the IndicTrans2 model.
To establish baseline performance, we first tested zero-shot Urdu IR with the unicamp-dl/mt5-base-mmarco-v2 model,
and then fine-tuned it on the translated dataset following the mMARCO multilingual IR methodology, which yields state-of-the-art results for Urdu IR.
## Model Details
### Model Description
- **Developed by:** Umer Butt
- **Model type:** IR model for reranking
- **Language(s) (NLP):** Urdu (implementation: Python/PyTorch)
## Uses
### Direct Use
## Bias, Risks, and Limitations
Although this model performs well and is currently state-of-the-art for Urdu IR, it was fine-tuned from the mMARCO model on a translated dataset (created with the IndicTrans2 model), so the limitations of both apply here as well.
## Evaluation
The evaluation was done using the scripts in the pygaggle library, specifically these files:
- `evaluate_monot5_reranker.py`
- `ms_marco_eval.py`
#### Metrics
Following the approach in the mMARCO work, the same two metrics were used:
- Recall@10: 0.438
- MRR@10: 0.247
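For reference, these are the standard definitions (not stated in the original card): Recall@10 is the fraction of queries for which a relevant passage appears in the top 10, and MRR@10 averages the reciprocal rank of the first relevant passage, counting 0 when none appears in the top 10:

$$
\mathrm{MRR@10} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i},
\qquad \frac{1}{\mathrm{rank}_i} := 0 \ \text{when no relevant passage is ranked in the top 10.}
$$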
### Results
| Model | Name | Data | Recall@10 | MRR@10 | Queries Ranked |
|---------------------------------------|---------------------------------------|--------------|-----------|--------|----------------|
| bm25 (k = 1000) | BM25 - Baseline from mmarco paper | English data | 0.391 | 0.187 | 6980 |
| unicamp-dl/mt5-base-mmarco-v2 | mmarco reranker - Baseline from paper | English data | | 0.370 | 6980 |
| bm25 (k = 1000) | BM25 | Urdu data | 0.2675 | 0.129 | 6980 |
| unicamp-dl/mt5-base-mmarco-v2 | Zero-shot mmarco | Urdu data | 0.408 | 0.204 | 6980 |
| This work | Mavkif/urdu-mt5-mmarco | Urdu data | 0.438 | 0.247 | 6980 |
### Model Architecture and Objective
```json
{
  "_name_or_path": "unicamp-dl/mt5-base-mmarco-v2",
  "architectures": ["MT5ForConditionalGeneration"],
  "d_model": 768,
  "num_heads": 12,
  "num_layers": 12,
  "dropout_rate": 0.1,
  "vocab_size": 250112,
  "model_type": "mt5",
  "transformers_version": "4.38.2"
}
```
For more details on how to customize the decoding parameters (such as max_length, num_beams, and early_stopping), refer to the Hugging Face documentation.
## How to Get Started with the Model
Example Code for Scoring Query-Document Pairs:
In an IR setting, you provide a query and one or more candidate documents. The model scores each document for relevance to the query, which can be used for ranking.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
import torch.nn.functional as F

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Mavkif/urdu-mt5-mmarco")
model = AutoModelForSeq2SeqLM.from_pretrained("Mavkif/urdu-mt5-mmarco")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)


def rank_documents(query, documents):
    # Create input pairs of query and documents
    query_document_pairs = [f"{query} [SEP] {doc}" for doc in documents]

    # Tokenize the input pairs
    inputs = tokenizer(query_document_pairs, padding=True, truncation=True, return_tensors="pt", max_length=512)
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Generate decoder input ids (starting with the decoder start token)
    decoder_input_ids = torch.full(
        (inputs["input_ids"].shape[0], 1), model.config.decoder_start_token_id, dtype=torch.long, device=device
    )

    # Perform inference to get the logits
    with torch.no_grad():
        outputs = model(**inputs, decoder_input_ids=decoder_input_ids)

    # Get the logits for the sequence output
    logits = outputs.logits

    # Extract the probabilities for the generated sequence
    scores = []
    for idx, doc in enumerate(documents):
        # Calculate the softmax over the entire vocabulary for each token in the sequence
        doc_logits = logits[idx]
        doc_probs = F.softmax(doc_logits, dim=-1)
        # Get the probability score for the "ہاں" ("yes") token in the output sequence
        token_true_id = tokenizer.convert_tokens_to_ids("ہاں")
        token_probs = doc_probs[:, token_true_id]
        sum_prob = token_probs.sum().item()  # Sum probability over the sequence
        scores.append((doc, sum_prob))  # Use the summed probability directly as the score

    # Normalize scores to be between 0 and 1
    max_score = max(score for _, score in scores)
    min_score = min(score for _, score in scores)
    normalized_scores = [
        ((score - min_score) / (max_score - min_score) if max_score > min_score else 0.5)
        for _, score in scores
    ]

    # Create a list of documents with normalized scores
    ranked_documents = [(documents[idx], normalized_scores[idx]) for idx in range(len(documents))]

    # Sort documents based on scores (descending order)
    ranked_documents = sorted(ranked_documents, key=lambda x: x[1], reverse=True)
    return ranked_documents


# Example query and documents
query = "پاکستان کی معیشت کی موجودہ صورتحال کیا ہے؟"
documents = [
    "پاکستان کی معیشت میں بہتری کے اشارے ہیں۔",
    "زر مبادلہ کے ذخائر میں کمی دیکھی گئی ہے۔",
    "فٹبال پاکستان میں تیزی سے مقبول ہو رہا ہے۔",
]

# Get ranked documents
ranked_docs = rank_documents(query, documents)

# Print the ranked documents
for idx, (doc, score) in enumerate(ranked_docs):
    print(f"Rank {idx + 1}: Score: {score}, Document: {doc}")

# Expected output:
# Rank 1: Score: 1.0, Document: پاکستان کی معیشت میں بہتری کے اشارے ہیں۔
# Rank 2: Score: 0.547, Document: فٹبال پاکستان میں تیزی سے مقبول ہو رہا ہے۔
# Rank 3: Score: 0.0, Document: زر مبادلہ کے ذخائر میں کمی دیکھی گئی ہے۔
```
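This follows the monoT5-style reranking recipe used by the evaluation scripts above: the model reads a query–document pair and the probability it assigns to the positive token ("ہاں", i.e. "yes") at the first decoding step serves as the relevance score. Note that the min–max normalization in the sketch only rescales the scores for display; it does not change the ranking order.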
## Model Card Authors [optional]
Umer Butt
## Model Card Contact
[email protected]
|
kiwi1229/cross_encoder_distil_roberta_base_1 | kiwi1229 | 2024-11-03T00:43:16Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-03T00:43:07Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Gemma-2-9b-baymax-GGUF | mradermacher | 2024-11-03T00:38:10Z | 26 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:lavita/ChatDoctor-HealthCareMagic-100k",
"base_model:samarth1029/Gemma-2-9b-baymax",
"base_model:quantized:samarth1029/Gemma-2-9b-baymax",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-02T17:25:25Z | ---
base_model: samarth1029/Gemma-2-9b-baymax
datasets:
- lavita/ChatDoctor-HealthCareMagic-100k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/samarth1029/Gemma-2-9b-baymax
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma-2-9b-baymax-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-9b-baymax-GGUF/resolve/main/Gemma-2-9b-baymax.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xu-Ouyang/pythia-2.8b-deduped-int4-step2-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T00:33:06Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-03T00:32:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF | mradermacher | 2024-11-03T00:23:10Z | 53 | 1 | transformers | [
"transformers",
"gguf",
"Roleplay",
"custom_code",
"storytelling",
"en",
"base_model:aloobun/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1",
"base_model:quantized:aloobun/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-02T23:42:35Z | ---
base_model: aloobun/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- Roleplay
- custom_code
- storytelling
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/aloobun/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1-GGUF/resolve/main/Reyna-RP-Qwen1.5-0.5B-Chat-v0.1.Q8_0.gguf) | Q8_0 | 0.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NM-StarUnleashed-GGUF | mradermacher | 2024-11-03T00:22:11Z | 63 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/NM-StarUnleashed",
"base_model:quantized:mergekit-community/NM-StarUnleashed",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T17:33:54Z | ---
base_model: mergekit-community/NM-StarUnleashed
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/NM-StarUnleashed
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NM-StarUnleashed-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NM-StarUnleashed-GGUF/resolve/main/NM-StarUnleashed.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DanJoshua/profesor_Swin3D_N_VIOPERU | DanJoshua | 2024-11-03T00:12:02Z | 42 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T21:11:53Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: profesor_Swin3D_N_VIOPERU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# profesor_Swin3D_N_VIOPERU
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5378
- Accuracy: 0.7946
- F1: 0.7946
- Precision: 0.7947
- Recall: 0.7946
- Roc Auc: 0.8677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 84
- training_steps: 560
- mixed_precision_training: Native AMP
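For reference, these settings roughly correspond to the following `TrainingArguments` (a hedged reconstruction using standard transformers argument names, not code copied from the original training script):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="profesor_Swin3D_N_VIOPERU",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # yields the total train batch size of 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=84,
    max_steps=560,
    fp16=True,  # "Native AMP" mixed precision
)
```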
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 1.3382 | 2.0161 | 37 | 0.6576 | 0.7321 | 0.7279 | 0.7476 | 0.7321 | 0.7781 |
| 1.2814 | 5.0071 | 74 | 0.6126 | 0.7679 | 0.7678 | 0.7682 | 0.7679 | 0.8508 |
| 1.152 | 7.0232 | 111 | 0.5517 | 0.8036 | 0.8035 | 0.8040 | 0.8036 | 0.8546 |
| 0.8907 | 10.0143 | 148 | 0.5129 | 0.7679 | 0.7678 | 0.7682 | 0.7679 | 0.8673 |
| 0.7726 | 13.0054 | 185 | 0.4512 | 0.7857 | 0.7854 | 0.7872 | 0.7857 | 0.8827 |
| 0.6067 | 15.0214 | 222 | 0.4228 | 0.8036 | 0.8030 | 0.8071 | 0.8036 | 0.8929 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
Xu-Ouyang/pythia-2.8b-deduped-int3-step2-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-03T00:10:51Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-03T00:07:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
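In the absence of an official snippet, here is a minimal sketch. It assumes a GPTQ backend such as `auto-gptq` or `gptqmodel` is installed; transformers then picks up the quantization config stored in the repo automatically:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-2.8b-deduped-int3-step2-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The Pythia suite is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```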
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felladrin/gguf-Q4_K_M-MiniCPM3-4B | Felladrin | 2024-11-02T23:59:31Z | 5 | 0 | null | [
"gguf",
"base_model:openbmb/MiniCPM3-4B-GGUF",
"base_model:quantized:openbmb/MiniCPM3-4B-GGUF",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-02T23:58:58Z | ---
base_model: openbmb/MiniCPM3-4B-GGUF
---
Sharded GGUF version of [openbmb/MiniCPM3-4B-GGUF](https://huggingface.co/openbmb/MiniCPM3-4B-GGUF).
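A sharded GGUF is loaded by pointing the runtime at the first shard; llama.cpp then discovers the remaining shards on its own. A minimal sketch that avoids guessing filenames:
```python
import glob

from huggingface_hub import snapshot_download
from llama_cpp import Llama

repo_dir = snapshot_download("Felladrin/gguf-Q4_K_M-MiniCPM3-4B")
first_shard = sorted(glob.glob(f"{repo_dir}/*.gguf"))[0]  # shard names sort so -00001- comes first
llm = Llama(model_path=first_shard, n_ctx=4096)
print(llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)["choices"][0]["message"]["content"])
```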
|
Felladrin/gguf-Q5_K_L-AMD-OLMo-1B-SFT-DPO | Felladrin | 2024-11-02T23:45:28Z | 14 | 0 | null | [
"gguf",
"base_model:bartowski/AMD-OLMo-1B-SFT-DPO-GGUF",
"base_model:quantized:bartowski/AMD-OLMo-1B-SFT-DPO-GGUF",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T23:45:14Z | ---
base_model: bartowski/AMD-OLMo-1B-SFT-DPO-GGUF
---
Sharded GGUF version of [bartowski/AMD-OLMo-1B-SFT-DPO-GGUF](https://huggingface.co/bartowski/AMD-OLMo-1B-SFT-DPO-GGUF).
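Alternatively, recent llama-cpp-python builds can fetch a model straight from the Hub; a sketch (the filename glob is an assumption, so check the Files tab for the actual shard names; newer builds also expose an `additional_files` argument for fetching the remaining shards):
```python
from llama_cpp import Llama

# Requires huggingface_hub; the glob below should match the first shard in this repo.
llm = Llama.from_pretrained(
    repo_id="Felladrin/gguf-Q5_K_L-AMD-OLMo-1B-SFT-DPO",
    filename="*00001-of-*.gguf",
)
print(llm("Q: Why is the sky blue?\nA:", max_tokens=64)["choices"][0]["text"])
```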
|
mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF | mradermacher | 2024-11-02T23:44:10Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Yukang/LongAlpaca-12k",
"base_model:Pretergeek/OpenChat-3.5-0106_32K-PoSE",
"base_model:quantized:Pretergeek/OpenChat-3.5-0106_32K-PoSE",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T21:31:26Z | ---
base_model: Pretergeek/OpenChat-3.5-0106_32K-PoSE
datasets:
- Yukang/LongAlpaca-12k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Pretergeek/OpenChat-3.5-0106_32K-PoSE
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
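Because this model was context-extended to 32K with PoSE, raise `n_ctx` accordingly when loading a quant; a minimal sketch (the Q4_K_M file matches the table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF",
    "OpenChat-3.5-0106_32K-PoSE.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=32768)  # use the extended context window
```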
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenChat-3.5-0106_32K-PoSE-i1-GGUF/resolve/main/OpenChat-3.5-0106_32K-PoSE.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jlbaker361/ddpogan_512_cubism_40_0_20_increasing | jlbaker361 | 2024-11-02T23:22:27Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-30T05:08:46Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
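No snippet was provided; since the tags mark this as a 🧨 diffusers `StableDiffusionPipeline`, a minimal sketch would be (the prompt is only an example inferred from the repo name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/ddpogan_512_cubism_40_0_20_increasing", torch_dtype=torch.float16
).to("cuda")
image = pipe("a cubist painting of a city").images[0]
image.save("sample.png")
```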
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mav23/openchat-3.6-8b-20240522-GGUF | mav23 | 2024-11-02T23:17:53Z | 48 | 0 | transformers | [
"transformers",
"gguf",
"openchat",
"llama3",
"C-RLFT",
"text-generation",
"arxiv:2309.11235",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T22:15:55Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- openchat
- llama3
- C-RLFT
library_name: transformers
pipeline_tag: text-generation
---
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
<h1>Advancing Open-source Language Models with Mixed-Quality Data</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://openchat.team">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/imoneoi/openchat">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="https://arxiv.org/pdf/2309.11235.pdf">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/pQjnXvNKHY">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>
<p align="center" style="margin-top: 0px;">
<span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span>
<img src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
</p>
<div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center">
<a href="https://huggingface.co/openchat/openchat-3.5-0106" style="text-decoration: none; color: black;">
<span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">Llama 3 Version: OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.6</span>
<span style="font-size: 1.0em; font-family: 'Helvetica'; color: white; background-color: #90e0ef; vertical-align: top; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">20240522</span>
<span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;">
<br> 🏆 The Overall Best Performing Open-source 8B Model 🏆
<br> 🚀 Outperforms Llama-3-8B-Instruct and open-source finetunes/merges 🚀
</span>
</a>
</div>
<div style="display: flex; justify-content: center; align-items: center; width: 110%; margin-left: -5%;">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/benchmarks-openchat-3.6-20240522.svg" style="width: 100%; border-radius: 1em">
</div>
<div style="display: flex; justify-content: center; align-items: center">
<p>* Llama-3-Instruct often fails to follow the few-shot templates. See <a href="https://huggingface.co/openchat/openchat-3.6-8b-20240522/discussions/6">example</a>.</p>
</div>
<div align="center">
<h2> Usage </h2>
</div>
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). See the example request below. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
| Model | Size | Context | Weights | Serving |
|-----------------------|------|---------|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| OpenChat-3.6-20240522 | 8B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat-3.6-8b-20240522) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.6-8b-20240522` |
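This repository additionally repackages the model as GGUF, so it can also run outside the OpenChat server, e.g. with llama-cpp-python (a sketch; the quant filename is an assumption, so check this repo's file list):
```python
from llama_cpp import Llama

# Hypothetical filename; substitute whichever quant you downloaded from this repo.
llm = Llama(model_path="openchat-3.6-8b-20240522.Q4_K_M.gguf", n_ctx=8192)
out = llm.create_chat_completion(  # uses the chat template embedded in the GGUF
    messages=[{"role": "user", "content": "Write a haiku about open source."}]
)
print(out["choices"][0]["message"]["content"])
```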
<details>
<summary>Example request (click to expand)</summary>
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.6",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
</details>
### Conversation templates
💡 **Default Mode**: Best for coding, chat and general tasks.
It's a modified version of the Llama 3 Instruct template; the only difference is the role names, which are either `GPT4 Correct User` or `GPT4 Correct Assistant`.
```
<|start_header_id|>GPT4 Correct User<|end_header_id|>\n\nHello<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\nHi<|eot_id|><|start_header_id|>GPT4 Correct User<|end_header_id|>\n\nHow are you today?<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\n
```
⚠️ **Notice:** Remember to set `<|eot_id|>` as the end-of-generation token.
The default template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:
```python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
{"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
```
## Inference using Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "openchat/openchat-3.6-8b-20240522"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "user", "content": "Explain how large language models work in detail."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids,
do_sample=True,
temperature=0.5,
max_new_tokens=1024
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
<div align="center">
<h2> Limitations </h2>
</div>
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Project Lead:**
- Guan Wang [imonenext at gmail dot com]
- [Alpay Ariyak](https://github.com/alpayariyak) [aariyak at wpi dot edu]
<div align="center">
<h2> Citation </h2>
</div>
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
``` |
mradermacher/Gemma-2-2b-baymax-i1-GGUF | mradermacher | 2024-11-02T23:14:15Z | 227 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:lavita/ChatDoctor-HealthCareMagic-100k",
"base_model:samarth1029/Gemma-2-2b-baymax",
"base_model:quantized:samarth1029/Gemma-2-2b-baymax",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T22:51:38Z | ---
base_model: samarth1029/Gemma-2-2b-baymax
datasets:
- lavita/ChatDoctor-HealthCareMagic-100k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/samarth1029/Gemma-2-2b-baymax
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma-2-2b-baymax-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ3_S.gguf) | i1-IQ3_S | 1.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ3_M.gguf) | i1-IQ3_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.7 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.7 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-2b-baymax-i1-GGUF/resolve/main/Gemma-2-2b-baymax.i1-Q6_K.gguf) | i1-Q6_K | 2.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
svr1945/my_awesome_eli5_clm-model | svr1945 | 2024-11-02T23:02:01Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-02T23:01:53Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilgpt2
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8240
## Model description
More information needed
## Intended uses & limitations
More information needed
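Until the author fills this in, a minimal sketch for trying the model with the transformers pipeline:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="svr1945/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)[0]["generated_text"])
```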
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9547 | 1.0 | 1315 | 3.8307 |
| 3.8542 | 2.0 | 2630 | 3.8240 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.1
|
MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-bf16-GGUF | MaziyarPanahi | 2024-11-02T22:53:12Z | 47 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:mlx-community/Llama-3-Groq-8B-Tool-Use-bf16",
"base_model:quantized:mlx-community/Llama-3-Groq-8B-Tool-Use-bf16",
"region:us",
"conversational"
] | text-generation | 2024-11-02T22:32:44Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-3-Groq-8B-Tool-Use-bf16-GGUF
base_model: mlx-community/Llama-3-Groq-8B-Tool-Use-bf16
inference: false
model_creator: mlx-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-bf16-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-bf16-GGUF)
- Model creator: [mlx-community](https://huggingface.co/mlx-community)
- Original model: [mlx-community/Llama-3-Groq-8B-Tool-Use-bf16](https://huggingface.co/mlx-community/Llama-3-Groq-8B-Tool-Use-bf16)
## Description
[MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-bf16-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-bf16-GGUF) contains GGUF format model files for [mlx-community/Llama-3-Groq-8B-Tool-Use-bf16](https://huggingface.co/mlx-community/Llama-3-Groq-8B-Tool-Use-bf16).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
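Any of the clients above can open the files in this repo; for instance, llama-cpp-python can pull a quant straight from the Hub (a sketch; the filename glob is an assumption, so list the repo files to pick an actual quant):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-bf16-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant name pattern
)
```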
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Xu-Ouyang/pythia-6.9b-deduped-int4-step1000-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-02T22:48:56Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-02T22:47:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
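In the absence of an official snippet, a minimal sketch (assumes a GPTQ backend such as `auto-gptq` or `gptqmodel` is installed):
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Xu-Ouyang/pythia-6.9b-deduped-int4-step1000-GPTQ-wikitext2",
    device_map="auto",
)
print(pipe("The Pythia suite is", max_new_tokens=32)[0]["generated_text"])
```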
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeyandfriends/PixelWave_FLUX.1-schnell_03 | mikeyandfriends | 2024-11-02T22:43:52Z | 386 | 7 | null | [
"gguf",
"license:other",
"region:us"
] | null | 2024-10-28T08:46:02Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# PixelWave Flux.1-schnell 03
Surprise! This is the schnell version of the PixelWave Flux 03 model.
You can expect some differences between the dev and schnell versions.
A general-purpose FLUX model, great for art and photo styles.
**Standard Use:**
- Euler, Simple
- DPM++ 2M, SGM Uniform
- Use at least 4 steps to remove noise; 6 to 8 steps will likely be needed to reduce errors with limbs, etc. (see the sketch below)
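A minimal diffusers sketch for the GGUF files in this repo (requires a recent diffusers build with GGUF support; the quant filename is an assumption, so substitute the one you downloaded):
```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Hypothetical filename; use whichever .gguf you downloaded from this repo.
transformer = FluxTransformer2DModel.from_single_file(
    "pixelwave_flux1_schnell_03.Q8_0.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
image = pipe("a cinematic photo of a lighthouse at dusk",
             num_inference_steps=6, guidance_scale=0.0).images[0]
image.save("sample.png")
```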
As training differences were extracted from dev and applied to schnell, this could be interpreted as a derivative of the dev model and as such inherits the license from dev.
 |
glif-loradex-trainer/i12_appelsiensam_blendedreality_v1 | glif-loradex-trainer | 2024-11-02T22:40:26Z | 11 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-02T22:39:50Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730587124832__000003000_0.jpg
text: portrait of a worn man, NLNDDRLT_PPLSNSM
- output:
url: samples/1730587148404__000003000_1.jpg
text: sloth driving a car, NLNDDRLT_PPLSNSM
- output:
url: samples/1730587172370__000003000_2.jpg
text: a cosmic well, military satellite tech, NLNDDRLT_PPLSNSM
base_model: black-forest-labs/FLUX.1-dev
trigger: NLNDDRLT_PPLSNSM
instance_prompt: NLNDDRLT_PPLSNSM
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# appelsiensam_blendedreality_v1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `i12`.
<Gallery />
## Trigger words
You should use `NLNDDRLT_PPLSNSM` to trigger the image generation.
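A minimal diffusers sketch (`load_lora_weights` may need a `weight_name=` argument if the repo holds several files; the prompt is taken from the sample gallery):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("glif-loradex-trainer/i12_appelsiensam_blendedreality_v1")
pipe.enable_model_cpu_offload()
image = pipe("sloth driving a car, NLNDDRLT_PPLSNSM").images[0]
image.save("sample.png")
```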
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/i12_appelsiensam_blendedreality_v1/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF | mradermacher | 2024-11-02T22:35:14Z | 245 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"paulml/OmniBeagleMBX-v3-7B",
"flemmingmiguel/MBX-7B-v3",
"en",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:quantized:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-02T17:59:27Z | ---
base_model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- paulml/OmniBeagleMBX-v3-7B
- flemmingmiguel/MBX-7B-v3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
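For repos where a quant ships as `.part1of2`-style pieces, the parts are simply byte-concatenated back into a single file before loading; a sketch with hypothetical filenames (the quants in this repo are single files):
```python
import shutil

# Hypothetical names; mradermacher split files follow the NAME.gguf.partXofY pattern.
parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```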
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
unstpb-nlp/llama-2-13b-ft-CompLex-2021 | unstpb-nlp | 2024-11-02T22:31:00Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-04T16:08:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
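In the absence of an official snippet, a minimal sketch for this Llama-architecture checkpoint (the prompt format is only a guess from the repo name, which points at the CompLex 2021 lexical-complexity task):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unstpb-nlp/llama-2-13b-ft-CompLex-2021"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Sentence: The treaty sought to ameliorate tensions.\nToken: ameliorate\nComplexity:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```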
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/CyberWitch-7B-GGUF | mradermacher | 2024-11-02T22:30:49Z | 109 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardMath-7B-V1.1",
"segolilylabs/Lily-Cybersecurity-7B-v0.2",
"en",
"base_model:LunaticPython161/CyberWitch-7B",
"base_model:quantized:LunaticPython161/CyberWitch-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T06:29:55Z | ---
base_model: LunaticPython161/CyberWitch-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardMath-7B-V1.1
- segolilylabs/Lily-Cybersecurity-7B-v0.2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LunaticPython161/CyberWitch-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CyberWitch-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
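For a quick start in Python, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (the chosen quant, prompt, and context size are illustrative, not recommendations):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from this repo (Q4_K_S is one of the "recommended" sizes below).
model_path = hf_hub_download(
    repo_id="mradermacher/CyberWitch-7B-GGUF",
    filename="CyberWitch-7B.Q4_K_S.gguf",
)

# Load the quantized model; n_ctx is an illustrative context window.
llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Explain what a buffer overflow is.", max_tokens=128)
print(result["choices"][0]["text"])
```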
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CyberWitch-7B-GGUF/resolve/main/CyberWitch-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
exala/db_aca2_4.10.1 | exala | 2024-11-02T22:17:26Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T22:17:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
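In the absence of documented usage, here is a minimal sketch based only on the repo tags (`roberta`, `text-classification`); note that the label set is not described in this card:

```python
from transformers import pipeline

# Hypothetical usage; the classification labels are undocumented.
classifier = pipeline("text-classification", model="exala/db_aca2_4.10.1")
print(classifier("Example input sentence."))
```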
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF | featherless-ai-quants | 2024-11-02T22:17:14Z | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:Eric111/SOLAR-10.7B-Instruct-v1.0-DPO",
"base_model:quantized:Eric111/SOLAR-10.7B-Instruct-v1.0-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T22:02:28Z | ---
base_model: Eric111/SOLAR-10.7B-Instruct-v1.0-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Eric111/SOLAR-10.7B-Instruct-v1.0-DPO GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q8_0.gguf) | 10875.85 MB |
| Q4_K_S | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q4_K_S.gguf) | 5835.08 MB |
| Q2_K | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q2_K.gguf) | 3817.78 MB |
| Q6_K | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K.gguf) | 8397.30 MB |
| Q3_K_M | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q3_K_M.gguf) | 4954.98 MB |
| Q3_K_S | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q3_K_S.gguf) | 4448.48 MB |
| Q3_K_L | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q3_K_L.gguf) | 5388.98 MB |
| Q4_K_M | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q4_K_M.gguf) | 6162.33 MB |
| Q5_K_S | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q5_K_S.gguf) | 7054.70 MB |
| Q5_K_M | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-Q5_K_M.gguf) | 7245.95 MB |
| IQ4_XS | [Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-GGUF/blob/main/Eric111-SOLAR-10.7B-Instruct-v1.0-DPO-IQ4_XS.gguf) | 5557.67 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on Hugging Face instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
gavinqiangli/mpnet-base-all-nli-triplet | gavinqiangli | 2024-11-02T22:15:29Z | 10 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-large-en",
"base_model:finetune:BAAI/bge-large-en",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-02T22:14:29Z | ---
base_model: BAAI/bge-large-en
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: A construction worker is standing on a crane placing a large arm
on top of a stature in progress.
sentences:
- A man is playing with his camera.
- A person standing
- Nobody is standing
- source_sentence: A boy in red slides down an inflatable ride.
sentences:
- a baby smiling
- A boy is playing on an inflatable ride.
- A boy pierces a knife through an inflatable ride.
- source_sentence: A man in a black shirt is playing a guitar.
sentences:
- A group of women are selling their wares
- The man is wearing black.
- The man is wearing a blue shirt.
- source_sentence: A man with a large power drill standing next to his daughter with
a vacuum cleaner hose.
sentences:
- A man holding a drill stands next to a girl holding a vacuum hose.
- Kids ride an amusement ride.
- The man and girl are painting the walls.
- source_sentence: A middle-aged man works under the engine of a train on rail tracks.
sentences:
- A guy is working on a train.
- Two young asian men are squatting.
- A guy is driving to work.
model-index:
- name: SentenceTransformer based on BAAI/bge-large-en
results:
- task:
type: triplet
name: Triplet
dataset:
name: all nli test
type: all-nli-test
metrics:
- type: cosine_accuracy
value: 0.8853079134513542
name: Cosine Accuracy
- type: dot_accuracy
value: 0.11469208654864578
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.885761839915267
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.8853079134513542
name: Euclidean Accuracy
- type: max_accuracy
value: 0.885761839915267
name: Max Accuracy
---
# SentenceTransformer based on BAAI/bge-large-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gavinqiangli/mpnet-base-all-nli-triplet")
# Run inference
sentences = [
'A middle-aged man works under the engine of a train on rail tracks.',
'A guy is working on a train.',
'A guy is driving to work.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `all-nli-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.8853 |
| dot_accuracy | 0.1147 |
| manhattan_accuracy | 0.8858 |
| euclidean_accuracy | 0.8853 |
| **max_accuracy** | **0.8858** |
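These figures can be reproduced with the evaluator linked above. A sketch with placeholder triplets (in practice the anchors, positives, and negatives come from the all-nli test split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("gavinqiangli/mpnet-base-all-nli-triplet")

# Placeholder triplets for illustration only.
anchors = ["A man is playing a guitar."]
positives = ["Someone plays an instrument."]
negatives = ["A man is cooking dinner."]

evaluator = TripletEvaluator(anchors, positives, negatives, name="all-nli-test")
print(evaluator(model))  # e.g. {'all-nli-test_cosine_accuracy': ...}
```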
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
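In code, these parameters correspond to a loss constructed roughly as follows (a sketch; the training loop itself is configured in the hyperparameters section below):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-large-en")  # the base model of this card

# scale=20.0 with cosine similarity, matching the JSON parameters above.
train_loss = losses.MultipleNegativesRankingLoss(
    model=model, scale=20.0, similarity_fct=util.cos_sim
)
```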
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
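A sketch of how these non-default values map onto the Sentence Transformers v3 training API (the output directory is an assumption; everything else mirrors the list above):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # assumed; not stated in this card
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```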
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | all-nli-test_max_accuracy |
|:------:|:----:|:-------------:|:---------------:|:-------------------------:|
| 0.3195 | 100 | 0.6642 | 0.6399 | - |
| 0.6390 | 200 | 0.1421 | 0.8174 | - |
| 0.9585 | 300 | 0.3632 | 1.0566 | - |
| 1.0 | 313 | - | - | 0.8858 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MaziyarPanahi/llama-3-youko-8b-instruct-GGUF | MaziyarPanahi | 2024-11-02T22:13:24Z | 38 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:rinna/llama-3-youko-8b-instruct",
"base_model:quantized:rinna/llama-3-youko-8b-instruct",
"region:us",
"conversational"
] | text-generation | 2024-11-02T21:52:40Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: llama-3-youko-8b-instruct-GGUF
base_model: rinna/llama-3-youko-8b-instruct
inference: false
model_creator: rinna
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/llama-3-youko-8b-instruct-GGUF](https://huggingface.co/MaziyarPanahi/llama-3-youko-8b-instruct-GGUF)
- Model creator: [rinna](https://huggingface.co/rinna)
- Original model: [rinna/llama-3-youko-8b-instruct](https://huggingface.co/rinna/llama-3-youko-8b-instruct)
## Description
[MaziyarPanahi/llama-3-youko-8b-instruct-GGUF](https://huggingface.co/MaziyarPanahi/llama-3-youko-8b-instruct-GGUF) contains GGUF format model files for [rinna/llama-3-youko-8b-instruct](https://huggingface.co/rinna/llama-3-youko-8b-instruct).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
winwithpartner/immi_llama_1 | winwithpartner | 2024-11-02T22:04:19Z | 85 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-02T22:00:27Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bif02/classifier_llama_86 | bif02 | 2024-11-02T22:00:34Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:bif02/classifier_llama_86",
"base_model:finetune:bif02/classifier_llama_86",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T11:50:41Z | ---
library_name: transformers
license: llama3.1
base_model: bif02/classifier_llama_86
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: classifier_llama_86
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier_llama_86
This model is a fine-tuned version of [bif02/classifier_llama_86](https://huggingface.co/bif02/classifier_llama_86) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6942
- Accuracy: 0.4392
- Precision: 0.4392
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 8
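These settings translate roughly into the following `transformers` configuration (a sketch; the output directory and dataset wiring are assumptions):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="classifier_llama_86",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=0,
    lr_scheduler_type="polynomial",
    num_train_epochs=8,
    # Adam betas and epsilon are left at their defaults: (0.9, 0.999), 1e-8.
)
```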
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| No log | 1.0 | 358 | 0.6954 | 0.4392 | 0.4392 | 1.0 |
| 0.6953 | 2.0 | 716 | 0.6960 | 0.4392 | 0.4392 | 1.0 |
| 0.6945 | 3.0 | 1074 | 0.6913 | 0.5608 | 0.0 | 0.0 |
| 0.6945 | 4.0 | 1432 | 0.6939 | 0.4392 | 0.4392 | 1.0 |
| 0.6938 | 5.0 | 1790 | 0.6924 | 0.5608 | 0.0 | 0.0 |
| 0.6938 | 6.0 | 2148 | 0.6934 | 0.4392 | 0.4392 | 1.0 |
| 0.6936 | 7.0 | 2506 | 0.6932 | 0.4392 | 0.4392 | 1.0 |
| 0.6936 | 8.0 | 2864 | 0.6942 | 0.4392 | 0.4392 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
sshetty/task-13-microsoft-Phi-3-mini-4k-instruct | sshetty | 2024-11-02T21:57:00Z | 5 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"custom_code",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2024-10-07T16:06:52Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
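The card leaves this section empty; given that the repo is a PEFT adapter for `microsoft/Phi-3-mini-4k-instruct`, a plausible (untested) loading sketch is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "sshetty/task-13-microsoft-Phi-3-mini-4k-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```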
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF | mradermacher | 2024-11-02T21:31:37Z | 147 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"paulml/OmniBeagleMBX-v3-7B",
"flemmingmiguel/MBX-7B-v3",
"en",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:quantized:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T10:20:12Z | ---
base_model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- paulml/OmniBeagleMBX-v3-7B
- flemmingmiguel/MBX-7B-v3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OmniBeagleSquaredMBX-v3-7B-v2-GGUF/resolve/main/OmniBeagleSquaredMBX-v3-7B-v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ONS-SOLAR-10.7B-GGUF | mradermacher | 2024-11-02T21:31:04Z | 52 | 0 | transformers | [
"transformers",
"gguf",
"SOLAR-10.7B",
"ko",
"base_model:ONS-AI-RESEARCH/ONS-SOLAR-10.7B",
"base_model:quantized:ONS-AI-RESEARCH/ONS-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T02:00:17Z | ---
base_model: ONS-AI-RESEARCH/ONS-SOLAR-10.7B
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- SOLAR-10.7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ONS-AI-RESEARCH/ONS-SOLAR-10.7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.IQ4_XS.gguf) | IQ4_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q6_K.gguf) | Q6_K | 9.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ONS-SOLAR-10.7B-GGUF/resolve/main/ONS-SOLAR-10.7B.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/Llama-Guard-3-8B-GGUF | MaziyarPanahi | 2024-11-02T21:29:30Z | 44 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:meta-llama/Llama-Guard-3-8B",
"base_model:quantized:meta-llama/Llama-Guard-3-8B",
"region:us",
"conversational"
] | text-generation | 2024-11-02T21:08:07Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Llama-Guard-3-8B-GGUF
base_model: meta-llama/Llama-Guard-3-8B
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-Guard-3-8B-GGUF](https://huggingface.co/MaziyarPanahi/Llama-Guard-3-8B-GGUF)
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Llama-Guard-3-8B](https://huggingface.co/meta-llama/Llama-Guard-3-8B)
## Description
[MaziyarPanahi/Llama-Guard-3-8B-GGUF](https://huggingface.co/MaziyarPanahi/Llama-Guard-3-8B-GGUF) contains GGUF format model files for [meta-llama/Llama-Guard-3-8B](https://huggingface.co/meta-llama/Llama-Guard-3-8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF | mradermacher | 2024-11-02T21:19:07Z | 115 | 0 | transformers | [
"transformers",
"gguf",
"Retrieval Augmented Generation",
"RAG",
"Multi-domain",
"ko",
"dataset:HumanF-MarkrAI/Korean-RAG-ver2",
"base_model:MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0",
"base_model:quantized:MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T05:51:12Z | ---
base_model: MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0
datasets:
- HumanF-MarkrAI/Korean-RAG-ver2
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- Retrieval Augmented Generation
- RAG
- Multi-domain
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
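As a concrete illustration (an addition, not from the original card), a downloaded single-part quant can typically be loaded with `llama-cpp-python`; the file path, context size, and prompt below are assumptions:

```python
from llama_cpp import Llama

# Load a single-part GGUF quant from the table below (path is illustrative)
llm = Llama(model_path="RAG-KO-Mixtral-7Bx2-v2.0.Q4_K_M.gguf", n_ctx=4096)

# The model is Korean-focused, so a Korean prompt is used here
out = llm("질문: 대한민국의 수도는 어디인가요?\n답변:", max_tokens=64)
print(out["choices"][0]["text"])
```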
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RAG-KO-Mixtral-7Bx2-v2.0-GGUF/resolve/main/RAG-KO-Mixtral-7Bx2-v2.0.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF | mradermacher | 2024-11-02T21:18:08Z | 411 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"base_model:megastudyedu/M-SOLAR-10.7B-v1.4-dpo",
"base_model:quantized:megastudyedu/M-SOLAR-10.7B-v1.4-dpo",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T19:33:16Z | ---
base_model: megastudyedu/M-SOLAR-10.7B-v1.4-dpo
language:
- ko
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/megastudyedu/M-SOLAR-10.7B-v1.4-dpo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/M-SOLAR-10.7B-v1.4-dpo-i1-GGUF/resolve/main/M-SOLAR-10.7B-v1.4-dpo.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DanJoshua/profesor_MViT_S_VIOPERU | DanJoshua | 2024-11-02T21:09:53Z | 34 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-11-02T00:03:29Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: profesor_MViT_S_VIOPERU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# profesor_MViT_S_VIOPERU
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2991
- Accuracy: 0.9107
- F1: 0.9107
- Precision: 0.9112
- Recall: 0.9107
- Roc Auc: 0.9576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 23
- eval_batch_size: 23
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 81
- training_steps: 810
- mixed_precision_training: Native AMP
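For orientation only, these settings correspond roughly to a Hugging Face `TrainingArguments` configuration like the sketch below; the output directory is a placeholder and nothing here comes from the original card:

```python
from transformers import TrainingArguments

# Approximate restatement of the hyperparameters listed above
args = TrainingArguments(
    output_dir="profesor_MViT_S_VIOPERU",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=23,
    per_device_eval_batch_size=23,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=81,
    max_steps=810,
    fp16=True,  # Native AMP mixed precision
)
```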
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 0.6473 | 6.0111 | 81 | 0.6350 | 0.75 | 0.7497 | 0.7513 | 0.75 | 0.8418 |
| 0.5438 | 13.0074 | 162 | 0.5559 | 0.7857 | 0.7857 | 0.7857 | 0.7857 | 0.8980 |
| 0.4124 | 20.0037 | 243 | 0.4445 | 0.8571 | 0.8564 | 0.8646 | 0.8571 | 0.9439 |
| 0.2958 | 26.0148 | 324 | 0.3501 | 0.8929 | 0.8927 | 0.8949 | 0.8929 | 0.9745 |
| 0.2126 | 33.0111 | 405 | 0.2827 | 0.8929 | 0.8927 | 0.8949 | 0.8929 | 0.9745 |
| 0.1469 | 40.0074 | 486 | 0.3615 | 0.875 | 0.8746 | 0.8794 | 0.875 | 0.9732 |
| 0.1063 | 47.0037 | 567 | 0.3208 | 0.8929 | 0.8927 | 0.8949 | 0.8929 | 0.9783 |
| 0.0883 | 53.0148 | 648 | 0.4270 | 0.875 | 0.8746 | 0.8794 | 0.875 | 0.9745 |
| 0.0631 | 60.0111 | 729 | 0.4191 | 0.8929 | 0.8927 | 0.8949 | 0.8929 | 0.9783 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF | featherless-ai-quants | 2024-11-02T21:09:51Z | 24 | 0 | null | [
"gguf",
"text-generation",
"base_model:saishf/SOVL-Mega-Mash-L3-8B",
"base_model:quantized:saishf/SOVL-Mega-Mash-L3-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T20:54:03Z | ---
base_model: saishf/SOVL-Mega-Mash-L3-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# saishf/SOVL-Mega-Mash-L3-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [saishf-SOVL-Mega-Mash-L3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [saishf-SOVL-Mega-Mash-L3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [saishf-SOVL-Mega-Mash-L3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [saishf-SOVL-Mega-Mash-L3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [saishf-SOVL-Mega-Mash-L3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [saishf-SOVL-Mega-Mash-L3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [saishf-SOVL-Mega-Mash-L3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [saishf-SOVL-Mega-Mash-L3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [saishf-SOVL-Mega-Mash-L3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [saishf-SOVL-Mega-Mash-L3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [saishf-SOVL-Mega-Mash-L3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/saishf-SOVL-Mega-Mash-L3-8B-GGUF/blob/main/saishf-SOVL-Mega-Mash-L3-8B-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/canarim-7b-vestibulaide-i1-GGUF | mradermacher | 2024-11-02T21:06:16Z | 69 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"pytorch",
"LLM",
"Portuguese",
"Llama 2",
"pt",
"base_model:dominguesm/canarim-7b-vestibulaide",
"base_model:quantized:dominguesm/canarim-7b-vestibulaide",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-11-02T20:02:53Z | ---
base_model: dominguesm/canarim-7b-vestibulaide
language:
- pt
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- text-generation
- pytorch
- LLM
- Portuguese
- Llama 2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/dominguesm/canarim-7b-vestibulaide
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/canarim-7b-vestibulaide-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/canarim-7b-vestibulaide-i1-GGUF/resolve/main/canarim-7b-vestibulaide.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF | featherless-ai-quants | 2024-11-02T21:03:46Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:gradientai/Llama-3-8B-Instruct-262k",
"base_model:quantized:gradientai/Llama-3-8B-Instruct-262k",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T20:30:00Z | ---
base_model: gradientai/Llama-3-8B-Instruct-262k
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# gradientai/Llama-3-8B-Instruct-262k GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [gradientai-Llama-3-8B-Instruct-262k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [gradientai-Llama-3-8B-Instruct-262k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [gradientai-Llama-3-8B-Instruct-262k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [gradientai-Llama-3-8B-Instruct-262k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [gradientai-Llama-3-8B-Instruct-262k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [gradientai-Llama-3-8B-Instruct-262k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [gradientai-Llama-3-8B-Instruct-262k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [gradientai-Llama-3-8B-Instruct-262k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [gradientai-Llama-3-8B-Instruct-262k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [gradientai-Llama-3-8B-Instruct-262k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [gradientai-Llama-3-8B-Instruct-262k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/gradientai-Llama-3-8B-Instruct-262k-GGUF/blob/main/gradientai-Llama-3-8B-Instruct-262k-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF | featherless-ai-quants | 2024-11-02T20:54:29Z | 10 | 0 | null | [
"gguf",
"text-generation",
"base_model:Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle",
"base_model:quantized:Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T20:22:25Z | ---
base_model: Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-GGUF/blob/main/Dampfinchen-Llama-3-8B-Ultra-Instruct-SaltSprinkle-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
dhruvyadav89300/BERT-phishing-classifier | dhruvyadav89300 | 2024-11-02T20:52:57Z | 160 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:shawhin/phishing-site-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T20:24:25Z | ---
library_name: transformers
datasets:
- shawhin/phishing-site-classification
base_model:
- google-bert/bert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [phishing-site-classification dataset](https://huggingface.co/datasets/shawhin/phishing-site-classification).
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub](https://github.com/dhruvyadav89300/BERT-Phishing-Classifier)
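The card ships without a usage snippet; a minimal inference sketch (the example URL and the returned label names are assumptions — labels depend on the model's config) could look like:

```python
from transformers import pipeline

# Hypothetical usage; label names come from the model's config, not this card
classifier = pipeline("text-classification", model="dhruvyadav89300/BERT-phishing-classifier")
print(classifier("http://secure-login.example-verify-account.com/update"))
```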
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Training Results
| Epoch | Training Loss | Step | Validation Loss | Accuracy | AUC | Learning Rate |
|-------|---------------|------|-----------------|----------|------|---------------|
| 1 | 0.4932 | 263 | 0.4237 | 0.789 | 0.912| 0.00019 |
| 2 | 0.3908 | 526 | 0.3761 | 0.824 | 0.932| 0.00018 |
| 3 | 0.3787 | 789 | 0.3136 | 0.860 | 0.941| 0.00017 |
| 4 | 0.3606 | 1052 | 0.4401 | 0.818 | 0.944| 0.00016 |
| 5 | 0.3545 | 1315 | 0.2928 | 0.864 | 0.947| 0.00015 |
| 6 | 0.3600 | 1578 | 0.3406 | 0.867 | 0.949| 0.00014 |
| 7 | 0.3233 | 1841 | 0.2897 | 0.869 | 0.950| 0.00013 |
| 8 | 0.3411 | 2104 | 0.3328 | 0.871 | 0.949| 0.00012 |
| 9 | 0.3292 | 2367 | 0.3189 | 0.876 | 0.954| 0.00011 |
| 10 | 0.3239 | 2630 | 0.3685 | 0.849 | 0.956| 0.00010 |
| 11 | 0.3201 | 2893 | 0.3317 | 0.862 | 0.956| 0.00009 |
| 12 | 0.3335 | 3156 | 0.2725 | 0.869 | 0.957| 0.00008 |
| 13 | 0.3230 | 3419 | 0.2856 | 0.882 | 0.955| 0.00007 |
| 14 | 0.3087 | 3682 | 0.2900 | 0.882 | 0.957| 0.00006 |
| 15 | 0.3050 | 3945 | 0.2704 | 0.893 | 0.957| 0.00005 |
| 16 | 0.3032 | 4208 | 0.2662 | 0.878 | 0.957| 0.00004 |
| 17 | 0.3027 | 4471 | 0.2930 | 0.882 | 0.956| 0.00003 |
| 18 | 0.2950 | 4734 | 0.2707 | 0.880 | 0.957| 0.00002 |
| 19 | 0.2998 | 4997 | 0.2782 | 0.884 | 0.957| 0.00001 |
| 20 | 0.2971 | 5260 | 0.2792 | 0.882 | 0.957| 0.00000 |
#### Final Training Summary
- **Total Training Runtime:** 555.4381 seconds
- **Final Training Loss:** 0.3372
- **Train Samples per Second:** 75.616
- **Eval Accuracy (Best Epoch):** 0.893 (Epoch 15)
- **Eval AUC (Best Epoch):** 0.957 (Multiple Epochs)
|
zixianma/mma_mantis_mix_586k_bsline-seq_len_8192-lr_1e-5-gl_bs_128-ep_1 | zixianma | 2024-11-02T20:46:13Z | 8 | 0 | null | [
"safetensors",
"llava",
"generated_from_trainer",
"base_model:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"base_model:finetune:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind",
"license:llama3",
"region:us"
] | null | 2024-11-02T05:50:40Z | ---
license: llama3
base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind
tags:
- generated_from_trainer
model-index:
- name: mma_mantis_mix_586k_bsline-seq_len_8192-lr_1e-5-gl_bs_128-ep_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://salesforceairesearch.wandb.io/jianguozhang/Mantis/runs/6rk46h2z)
# mma_mantis_mix_586k_bsline-seq_len_8192-lr_1e-5-gl_bs_128-ep_1
This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.43.0
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
griffio/vit-base-patch16-224-in21k-rotated-dungeons-v12 | griffio | 2024-11-02T20:37:56Z | 190 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-02T20:36:44Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-rotated-dungeons-v12
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: rotated_maps
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6666666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-rotated-dungeons-v12
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rotated_maps dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4316
- Accuracy: 0.6667
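No usage example is provided in the card; a minimal sketch (the input image path is a placeholder) might be:

```python
from transformers import pipeline

# Illustrative only; the image path is not from the card
classifier = pipeline(
    "image-classification",
    model="griffio/vit-base-patch16-224-in21k-rotated-dungeons-v12",
)
print(classifier("rotated_map_tile.png"))
```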
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 27
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.0811 | 8.3333 | 100 | 1.6611 | 0.4167 |
| 0.6509 | 16.6667 | 200 | 1.4987 | 0.5 |
| 0.5656 | 25.0 | 300 | 1.4316 | 0.6667 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
PMJAi/bert-base-multilingual-cased-sl_50-reranker | PMJAi | 2024-11-02T20:34:10Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T20:33:44Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
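The card leaves this section blank; judging from the `cross-encoder` tag and the `text-classification` pipeline, a hedged sketch (the query/passage pairing is an assumption) might be:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: a query/passage reranker scored via sequence classification
repo = "PMJAi/bert-base-multilingual-cased-sl_50-reranker"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tok("example query", "candidate passage", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits  # relevance score(s) for the pair
print(score)
```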
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
async0x42/Rombos-LLM-V2.5-Qwen-72b-exl2_3.25bpw | async0x42 | 2024-11-02T20:31:26Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-72B-Instruct",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-11-02T20:18:05Z | ---
license: other
library_name: transformers
base_model:
- Qwen/Qwen2.5-72B-Instruct
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
model-index:
- name: Replete-LLM-V2.5-Qwen-72b_Duplicated
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.55
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 61.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 47.58
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.8
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.32
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.83
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rombodawg/Replete-LLM-V2.5-Qwen-72b_Duplicated
name: Open LLM Leaderboard
---
# Rombos-LLM-V2.5-Qwen-72b

Rombos-LLM-V2.5-Qwen-72b is a continuously finetuned version of Qwen2.5-72B. I noticed recently that the Qwen team did not adopt my continuous-finetuning methods, despite their great benefits and lack of downsides, so I took it upon myself to merge the instruct model with the base model using the *TIES* merge method.
This version of the model shows higher performance than the original instruct and base models.
Quants: (Coming soon)
GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-72b-GGUF
EXL2:
Benchmarks: (Coming soon)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rombodawg__Replete-LLM-V2.5-Qwen-72b_Duplicated)
| Metric |Value|
|-------------------|----:|
|Avg. |45.39|
|IFEval (0-Shot) |71.55|
|BBH (3-Shot) |61.27|
|MATH Lvl 5 (4-Shot)|47.58|
|GPQA (0-shot) |19.80|
|MuSR (0-shot) |17.32|
|MMLU-PRO (5-shot) |54.83|
|
rzv09/task-13-google-gemma-2-2b | rzv09 | 2024-11-02T20:28:58Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b",
"base_model:adapter:google/gemma-2-2b",
"region:us"
] | null | 2024-10-23T18:21:45Z | ---
base_model: google/gemma-2-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
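The card leaves this section blank; based only on the declared base model and the PEFT library tag, a hedged loading sketch might be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption-laden sketch: load the adapter on top of its declared base model
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")
model = PeftModel.from_pretrained(base, "rzv09/task-13-google-gemma-2-2b")
tok = AutoTokenizer.from_pretrained("google/gemma-2-2b")
```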
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF | featherless-ai-quants | 2024-11-02T20:26:40Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:FlagAlpha/Llama3-Chinese-8B-Instruct",
"base_model:quantized:FlagAlpha/Llama3-Chinese-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T20:15:04Z | ---
base_model: FlagAlpha/Llama3-Chinese-8B-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FlagAlpha/Llama3-Chinese-8B-Instruct GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [FlagAlpha-Llama3-Chinese-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [FlagAlpha-Llama3-Chinese-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FlagAlpha-Llama3-Chinese-8B-Instruct-GGUF/blob/main/FlagAlpha-Llama3-Chinese-8B-Instruct-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
allknowingroger/Qwen2.5-7B-task6 | allknowingroger | 2024-11-02T20:23:19Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:fblgit/cybertron-v4-qw7B-MGS",
"base_model:merge:fblgit/cybertron-v4-qw7B-MGS",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-02T20:19:09Z | ---
base_model:
- fblgit/cybertron-v4-qw7B-MGS
- Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
- Qwen/Qwen2.5-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
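Task arithmetic builds the merged weights by adding the weighted difference between each fine-tuned checkpoint and the base model back onto the base. A minimal sketch of the idea (not the mergekit implementation itself), assuming all checkpoints share the same architecture and parameter names:

```python
import torch

def task_arithmetic(base: dict, finetuned: list[dict], weights: list[float]) -> dict:
    # merged = base + sum_i w_i * (finetuned_i - base), applied per tensor
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (ft[name] - base_param) for ft, w in zip(finetuned, weights))
        merged[name] = base_param + delta
    return merged
```

With `weight: 1.0` for each model, as in the configuration below, both task vectors are added to the base in full.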
### Models Merged
The following models were included in the merge:
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
* [Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: fblgit/cybertron-v4-qw7B-MGS
parameters:
weight: 1.0
- model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
parameters:
weight: 1.0
merge_method: task_arithmetic
base_model: Qwen/Qwen2.5-7B
parameters:
normalize: true
dtype: bfloat16
``` |
griffio/vit-base-patch16-224-in21k-rotated-dungeons-v10 | griffio | 2024-11-02T20:18:04Z | 261 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-02T20:17:31Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-rotated-dungeons-v10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: rotated_maps
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-rotated-dungeons-v10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rotated_maps dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9495
- Accuracy: 0.875
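As a usage sketch (not part of the original card), the model can be loaded with the transformers image-classification pipeline; the image path below is a hypothetical placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="griffio/vit-base-patch16-224-in21k-rotated-dungeons-v10",
)
# "dungeon_map.png" is a hypothetical example image
print(classifier("dungeon_map.png"))
```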
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.3924 | 8.3333 | 100 | 1.0648 | 0.75 |
| 0.2925 | 16.6667 | 200 | 0.9745 | 0.875 |
| 0.2696 | 25.0 | 300 | 0.9495 | 0.875 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
laishram/FineLlama-3.2-1B-GGUF | laishram | 2024-11-02T20:16:40Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-19T18:53:43Z | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** laishram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
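A minimal inference sketch with llama-cpp-python, assuming one of the GGUF files from this repo has already been downloaded locally (the filename below is hypothetical):

```python
from llama_cpp import Llama

# Hypothetical local filename for one of the GGUF quants in this repo
llm = Llama(model_path="FineLlama-3.2-1B.Q4_K_M.gguf", n_ctx=2048)
out = llm("Summarize what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```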
|
featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF | featherless-ai-quants | 2024-11-02T20:10:50Z | 18 | 0 | null | [
"gguf",
"text-generation",
"base_model:nbeerbower/llama3-KawaiiMahouSauce-8B",
"base_model:quantized:nbeerbower/llama3-KawaiiMahouSauce-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-02T19:41:15Z | ---
base_model: nbeerbower/llama3-KawaiiMahouSauce-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/llama3-KawaiiMahouSauce-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [nbeerbower-llama3-KawaiiMahouSauce-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [nbeerbower-llama3-KawaiiMahouSauce-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF/blob/main/nbeerbower-llama3-KawaiiMahouSauce-8B-IQ4_XS.gguf) | 4276.62 MB |
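
As a sketch, recent llama-cpp-python builds can pull a quant straight from the Hub and load it in one step via `Llama.from_pretrained` (assuming the package is installed with huggingface-hub support; the Q4_K_S filename is taken from the table above):

```python
from llama_cpp import Llama

# Downloads the Q4_K_S file from this repo, then loads it
llm = Llama.from_pretrained(
    repo_id="featherless-ai-quants/nbeerbower-llama3-KawaiiMahouSauce-8B-GGUF",
    filename="nbeerbower-llama3-KawaiiMahouSauce-8B-Q4_K_S.gguf",
)
print(llm("Hello, ", max_tokens=16)["choices"][0]["text"])
```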
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
ChiragB254/health-care-gemma2-kagglex | ChiragB254 | 2024-11-02T19:56:49Z | 8 | 1 | null | [
"text-generation",
"en",
"dataset:Mohammed-Altaf/medical-instruction-100k",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"region:us"
] | text-generation | 2024-10-30T03:25:04Z | ---
language:
- en
base_model:
- google/gemma-2-2b-it
datasets:
- Mohammed-Altaf/medical-instruction-100k
pipeline_tag: text-generation
--- |
pppereira3/distilbert-rating-regression-rob-dset | pppereira3 | 2024-11-02T19:52:17Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T19:25:11Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-rating-regression-rob-dset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rating-regression-rob-dset
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1938
- Accuracy: 0.5579
- Mse: 0.6630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
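Expressed as transformers `TrainingArguments`, these hyperparameters would look roughly like the sketch below (the output directory is a placeholder; values mirror the list above):

```python
from transformers import TrainingArguments

# Placeholder output_dir; hyperparameter values mirror the list above
args = TrainingArguments(
    output_dir="distilbert-rating-regression-rob-dset",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```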
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mse |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 425 | 1.0287 | 0.5356 | 0.7750 |
| 1.1536 | 2.0 | 850 | 1.0397 | 0.5328 | 0.6829 |
| 0.8897 | 3.0 | 1275 | 1.0515 | 0.5565 | 0.6503 |
| 0.6831 | 4.0 | 1700 | 1.1938 | 0.5579 | 0.6630 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
hatemestinbejaia/mmarco-Arabic-AraElectra-cross-encoder-KD-v1 | hatemestinbejaia | 2024-11-02T19:51:03Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-02T19:50:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
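Pending the official snippet, here is a hedged sketch for scoring a query-passage pair with this model. Its behavior is assumed from the repo name, which suggests an Arabic mMARCO reranking cross-encoder with a single relevance logit; the query and passage below are hypothetical examples:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "hatemestinbejaia/mmarco-Arabic-AraElectra-cross-encoder-KD-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical Arabic query/passage pair; output assumed to be a relevance score
inputs = tokenizer("ما هي عاصمة فرنسا؟", "باريس هي عاصمة فرنسا.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze()
print(score)
```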
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xu-Ouyang/pythia-6.9b-deduped-int4-step256-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-02T19:46:27Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-02T19:37:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
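Pending the official snippet, a hedged sketch for loading this GPTQ checkpoint with transformers (a GPTQ backend such as auto-gptq or gptqmodel must be installed; the prompt and generation parameters are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-6.9b-deduped-int4-step256-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ weights are dequantized on the fly; device_map="auto" places layers on GPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```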
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF | mradermacher | 2024-11-02T19:40:09Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"llm-agent",
"en",
"dataset:xingyaoww/code-act",
"base_model:xingyaoww/CodeActAgent-Llama-2-7b",
"base_model:quantized:xingyaoww/CodeActAgent-Llama-2-7b",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-02T17:00:44Z | ---
base_model: xingyaoww/CodeActAgent-Llama-2-7b
datasets:
- xingyaoww/code-act
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- llm-agent
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
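If you do hit a multi-part quant, the parts only need to be concatenated byte-for-byte into a single .gguf before loading. A minimal Python sketch (the part names are hypothetical, following the *.partXofY convention used in these repos):

```python
import shutil

# Hypothetical part names; adjust to the actual files you downloaded
parts = ["model.i1-Q6_K.gguf.part1of2", "model.i1-Q6_K.gguf.part2of2"]
with open("model.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy, avoids loading parts into RAM
```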
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeActAgent-Llama-2-7b-i1-GGUF/resolve/main/CodeActAgent-Llama-2-7b.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|