---
base_model: pints-ai/1.5-Pints-16K-v0.1
datasets:
  - pints-ai/Expository-Prose-V1
  - HuggingFaceH4/ultrachat_200k
  - Open-Orca/SlimOrca-Dedup
  - meta-math/MetaMathQA
  - HuggingFaceH4/deita-10k-v0-sft
  - WizardLM/WizardLM_evol_instruct_V2_196k
  - togethercomputer/llama-instruct
  - LDJnr/Capybara
  - HuggingFaceH4/ultrafeedback_binarized
extra_gated_fields:
  Company: text
  Country: country
  I agree to use this model in accordance with the aforementioned Terms of Use: checkbox
  I want to use this model for:
    options:
      - Research
      - Education
      - label: Other
        value: other
    type: select
  Specific date: date_picker
extra_gated_prompt: >-
  Though best efforts have been made to ensure, as much as possible, that all
  texts in the training corpora are royalty free, this does not constitute a
  legal guarantee that such is the case. **By using any of the models, corpora,
  or part thereof, the user agrees to bear full responsibility for doing the
  necessary due diligence to ensure compliance with their local copyright
  laws. Additionally, the user agrees to bear any damages arising as a direct
  result (or otherwise) of using any artifacts released by the Pints research
  team, as well as full responsibility for the consequences of their usage (or
  implementation) of any such released artifacts. The user also indemnifies
  the Pints Research Team (and any of its members or agents) against any
  damage, related or unrelated, to the release or subsequent usage of any
  findings, artifacts, or code by the team.** For the avoidance of doubt, any
  artifacts released by the Pints Research team are done so in accordance with
  the 'fair use' clause of Copyright Law, in the hope that this will aid the
  research community in bringing LLMs to the next frontier.
language:
  - en
library_name: transformers
license: mit
quantized_by: mradermacher
---

## About

static quants of https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1

weighted/imatrix quants are available at https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
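
As a concrete starting point, here is a minimal sketch of fetching one of the quants below and running it with llama-cpp-python. The repo id, file name, and multi-part naming follow this repo's usual conventions but are assumptions; check the repo's file list for the exact names.

```python
# Minimal sketch: download a quant from this repo and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "mradermacher/1.5-Pints-16K-v0.1-GGUF"          # assumed repo id
filename = "1.5-Pints-16K-v0.1.Q4_K_M.gguf"               # "fast, recommended" per the table below

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# If a quant is split into multiple parts (e.g. *.part1of2 -- naming is an
# assumption), concatenate the parts in order into one .gguf before loading:
# with open("model.gguf", "wb") as out:
#     for part in ["model.gguf.part1of2", "model.gguf.part2of2"]:
#         with open(part, "rb") as f:
#             out.write(f.read())

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write one sentence about small language models.", max_tokens=64)
print(out["choices"][0]["text"])
```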

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 0.7 | |
| GGUF | Q3_K_S | 0.8 | |
| GGUF | Q3_K_M | 0.9 | lower quality |
| GGUF | Q3_K_L | 0.9 | |
| GGUF | IQ4_XS | 1.0 | |
| GGUF | Q4_K_S | 1.0 | fast, recommended |
| GGUF | Q4_K_M | 1.1 | fast, recommended |
| GGUF | Q5_K_S | 1.2 | |
| GGUF | Q5_K_M | 1.2 | |
| GGUF | Q6_K | 1.4 | very good quality |
| GGUF | Q8_0 | 1.8 | fast, best quality |
| GGUF | f16 | 3.2 | 16 bpw, overkill |
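
Since the table is sorted by size, one practical way to choose is simply the largest quant that fits your memory budget. The sketch below is illustrative, not part of this repo: it queries the Hub for the repo's GGUF files and their sizes, then picks along those lines.

```python
# Illustrative: pick the largest .gguf in this repo that fits a size budget.
# Assumes `pip install huggingface_hub`; the repo id is an assumption.
from huggingface_hub import HfApi

repo_id = "mradermacher/1.5-Pints-16K-v0.1-GGUF"
budget_gb = 1.2  # leave headroom for the KV cache and runtime overhead

info = HfApi().model_info(repo_id, files_metadata=True)
quants = [
    (s.rfilename, s.size / 1e9)
    for s in info.siblings
    if s.rfilename.endswith(".gguf") and s.size is not None
]
fitting = [(name, gb) for name, gb in quants if gb <= budget_gb]
best = max(fitting, key=lambda x: x[1], default=None)
print(best)  # the best-fitting quant, or None if nothing fits the budget
```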

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

*(graph: ikawrakow's comparison of lower-quality quant types; image not included in this copy)*

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.