---
base_model: pints-ai/1.5-Pints-16K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
extra_gated_fields:
  Company: text
  Country: country
  I agree to use this model in accordance with the aforementioned Terms of Use: checkbox
  I want to use this model for:
    options:
    - Research
    - Education
    - label: Other
      value: other
    type: select
  Specific date: date_picker
extra_gated_prompt: Though best efforts have been made to ensure, as much as possible,
  that all texts in the training corpora are royalty-free, this does not constitute
  a legal guarantee that such is the case. **By using any of the models, corpora, or
  part thereof, the user agrees to bear full responsibility to do the necessary due
  diligence to ensure that he / she is in compliance with their local copyright laws.**
  Additionally, the user agrees to bear any damages arising as a direct cause (or
  otherwise) of using any artifacts released by the pints research team, as well as
  full responsibility for the consequences of his / her usage (or implementation)
  of any such released artifacts. The user also indemnifies the Pints Research Team (and
  any of its members or agents) against any damage, related or unrelated, to the release
  or subsequent usage of any findings, artifacts, or code by the team. For the avoidance
  of doubt, any artifacts released by the Pints Research Team are released in accordance
  with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
  community in bringing LLMs to the next frontier.
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type:  -->
<!-- ### tags:  -->
static quants of https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
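
For reference, here is a minimal Python sketch of that concatenation step. The part names are hypothetical examples; all quants in this repo are single files and need no joining:

```python
# Sketch: join split GGUF parts into one file before loading.
# Hypothetical part names; none of the quants in this repo are split.
import glob
import shutil

parts = sorted(glob.glob("model.gguf.part*"))  # e.g. part1of2, part2of2
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```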

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants). A short loading sketch follows the table.

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q2_K.gguf) | Q2_K | 0.7 |  |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.8 |  |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.9 |  |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.0 |  |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.2 |  |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.2 |  |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
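
As a worked example, the Q4_K_M file from the table can be fetched and run with `huggingface_hub` and `llama-cpp-python`. This is a sketch, not the only way to run GGUF files, and the 16K context value is an assumption taken from the base model's name:

```python
# Sketch: download one quant from this repo and run a short generation.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/1.5-Pints-16K-v0.1-GGUF",
    filename="1.5-Pints-16K-v0.1.Q4_K_M.gguf",  # "fast, recommended" above
)
llm = Llama(model_path=model_path, n_ctx=16384)  # assumed 16K context
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```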

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to
common questions and for requesting quants of other models.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to
[@nicoboss](https://huggingface.co/nicoboss) for giving me access to his
private supercomputer, enabling me to provide many more imatrix quants,
at much higher quality, than I would otherwise be able to.

<!-- end -->