modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
MaziyarPanahi/Magic_8B-GGUF | MaziyarPanahi | 2024-11-01T03:47:21Z | 71 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:FourOhFour/Magic_8B",
"base_model:quantized:FourOhFour/Magic_8B",
"region:us",
"conversational"
] | text-generation | 2024-11-01T03:04:33Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Magic_8B-GGUF
base_model: FourOhFour/Magic_8B
inference: false
model_creator: FourOhFour
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Magic_8B-GGUF](https://huggingface.co/MaziyarPanahi/Magic_8B-GGUF)
- Model creator: [FourOhFour](https://huggingface.co/FourOhFour)
- Original model: [FourOhFour/Magic_8B](https://huggingface.co/FourOhFour/Magic_8B)
## Description
[MaziyarPanahi/Magic_8B-GGUF](https://huggingface.co/MaziyarPanahi/Magic_8B-GGUF) contains GGUF format model files for [FourOhFour/Magic_8B](https://huggingface.co/FourOhFour/Magic_8B).
### About GGUF
GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of November 27th, 2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
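Any of these quants can be loaded locally once downloaded. As a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (the filename and generation settings below are illustrative assumptions, not part of this repo's documentation):
```python
# minimal sketch: load a downloaded GGUF quant with llama-cpp-python
# (filename and settings are illustrative assumptions)
from llama_cpp import Llama

llm = Llama(
    model_path="Magic_8B.Q4_K_M.gguf",  # path to a downloaded quant file
    n_ctx=4096,                         # context window
    n_gpu_layers=-1,                    # offload all layers to GPU if available
)
out = llm("Q: What is the GGUF format?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```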
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF | mradermacher | 2024-11-01T03:43:28Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:RJuro/munin-neuralbeagle-SkoleGPTOpenOrca-7b",
"base_model:quantized:RJuro/munin-neuralbeagle-SkoleGPTOpenOrca-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T03:29:35Z | ---
base_model: RJuro/munin-neuralbeagle-SkoleGPTOpenOrca-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RJuro/munin-neuralbeagle-SkoleGPTOpenOrca-7b
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
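As a minimal sketch, a single quant from the table below can be fetched with `huggingface_hub` (the chosen filename matches the Q4_K_M row):
```python
# minimal sketch: download one quant file from this repo
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF",
    filename="munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```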
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/munin-neuralbeagle-SkoleGPTOpenOrca-7b-GGUF/resolve/main/munin-neuralbeagle-SkoleGPTOpenOrca-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): *(graph not reproduced here)*
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
alifares330/oxford-pet-segmentation-exp | alifares330 | 2024-11-01T03:41:18Z | 8 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2024-11-01T03:41:06Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet34",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
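These parameters can also be used to instantiate the architecture directly for a quick smoke test; a minimal sketch (the 256x256 input size is illustrative, since FPN only requires spatial dimensions divisible by 32):
```python
# minimal sketch: build the FPN from the init parameters above and run a forward pass
import torch
import segmentation_models_pytorch as smp

model = smp.FPN(**model_init_params)  # model_init_params as defined above
model.eval()
with torch.no_grad():
    mask_logits = model(torch.randn(1, 3, 256, 256))  # (batch, classes, H, W)
print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```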
## Model metrics
```json
[
{
"test_per_image_iou": 0.907154381275177,
"test_dataset_iou": 0.9143515825271606
}
]
```
## Dataset
Dataset name: Oxford Pet
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
mradermacher/flux-7b-v0.3-GGUF | mradermacher | 2024-11-01T03:22:44Z | 22 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:chanwit/flux-7b-v0.3",
"base_model:quantized:chanwit/flux-7b-v0.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T02:54:58Z | ---
base_model: chanwit/flux-7b-v0.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chanwit/flux-7b-v0.3
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/flux-7b-v0.3-GGUF/resolve/main/flux-7b-v0.3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): *(graph not reproduced here)*
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kristiannordby/t5-sql | kristiannordby | 2024-11-01T03:14:54Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-01T03:13:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
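Pending author documentation, here is a minimal loading sketch based only on this repo's metadata (a T5 text2text checkpoint); the example prompt is an assumption, not a documented format:
```python
# minimal sketch: standard seq2seq loading for this T5 checkpoint
# (the "translate to SQL" prompt is an assumption, not documented)
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kristiannordby/t5-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("translate to SQL: how many users signed up in 2023?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```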
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HoneyBadger2989/Llama-3.1-Storm-8B-GGUF | HoneyBadger2989 | 2024-11-01T03:13:17Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.1",
"conversational",
"instruction following",
"reasoning",
"function calling",
"mergekit",
"finetuning",
"axolotl",
"autoquant",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2406.06623",
"arxiv:2311.07911",
"arxiv:2311.12022",
"arxiv:2406.01574",
"arxiv:1803.05457",
"arxiv:2310.16049",
"arxiv:2210.09261",
"arxiv:2109.07958",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T01:42:21Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- llama-3.1
- conversational
- instruction following
- reasoning
- function calling
- mergekit
- finetuning
- axolotl
- autoquant
- gguf
---

Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
**🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
**🚀Ollama:** `ollama run ajindal/llama3.1-storm:8b`
## TL;DR

We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
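To illustrate the interpolation at the heart of SLERP, here is a minimal sketch of the formula applied to two flattened weight vectors (illustrative only; the actual merge used the SLERP method linked above, not this code):
```python
# minimal sketch of spherical linear interpolation (SLERP) between two weight vectors
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float) -> np.ndarray:
    cos_theta = np.dot(w0, w1) / (np.linalg.norm(w0) * np.linalg.norm(w1))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # angle between the vectors
    if np.isclose(theta, 0.0):
        return (1 - t) * w0 + t * w1  # nearly parallel: fall back to linear interpolation
    return (np.sin((1 - t) * theta) * w0 + np.sin(t * theta) * w1) / np.sin(theta)

merged = slerp(np.random.randn(8), np.random.randn(8), t=0.5)  # t=0.5 blends both parents equally
```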
## 🏆 Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
We also benchmarked our model with the recently published model [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.
## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
| Model Strength | Relevant Benchmarks |
|:---|:---|
| 🎯 Improved Instruction Following | IFEval Strict (+3.93%) |
| 🌐 Enhanced Knowledge Driven Question Answering | GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%) |
| 🧠 Better Reasoning | ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%) |
| 🤖 Superior Agentic Capabilities | BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%) |
| 🚫 Reduced Hallucinations | TruthfulQA (+9%) |
**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`
## 💻 How to Use the Model
The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the dtype used by the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so running in `bfloat16` is recommended for the best results.
### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:
### Conversational Use-case
#### Use with [🤗 Transformers](https://github.com/huggingface/transformers)
##### Using `transformers.pipeline()` API
```python
import transformers
import torch
model_id = "akjindal53244/Llama-3.1-Storm-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is 2+2?"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1]) # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
```
##### Using `model.generate()` API
```bash
pip install flash_attn==2.6.3
```
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
# Apply Llama3.1 chat-template
def format_prompt(user_query):
template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
return template.format(user_query)
model_id = 'akjindal53244/Llama-3.1-Storm-8B'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=False,
use_flash_attention_2=True
)
# Build final input prompt after applying chat-template
prompt = format_prompt("What is 2+2?")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response) # Expected Output: '2 + 2 = 4'
```
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "akjindal53244/Llama-3.1-Storm-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: 2 + 2 = 4
```
#### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
```bash
pip install 'litgpt[all]'
litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
```
```python
from litgpt import LLM
llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
llm.generate("What do Llamas eat?")
```
### Function Calling Use-case
[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct as demonstrated by the BFCL benchmark.
#### Prompt Format for Function Calling
Llama-3.1-Storm-8B is trained with a specific system prompt for function calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>LIST_OF_TOOLS</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
The system prompt above should be used with `LIST_OF_TOOLS` replaced by the actual list of available tools.
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "akjindal53244/Llama-3.1-Storm-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
def create_system_prompt(tools_list):
system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>{}</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>"""
# Convert the tools list to a string representation
tools_str = json.dumps(tools_list, ensure_ascii=False)
# Format the system prompt with the tools list
system_prompt = system_prompt_format.format(tools_str)
return system_prompt
# Example tools list
tools_list = [
{
"name": "peers",
"description": "Retrieves a list of company peers given a stock symbol.",
"parameters": {
"symbol": {
"description": "The stock symbol for the company.",
"type": "str",
"default": ""
}
}
},
{
"name": "web_chain_details",
"description": "python",
"parameters": {
"chain_slug": {
"description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
"type": "str",
"default": "ethereum"
}
}
}
]
# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```
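The generated text wraps a Python-style dict in `<tool_call>` tags. As a minimal sketch, it can be parsed like this (the helper below is an illustrative assumption, not part of the model's tooling):
```python
# minimal sketch: extract tool calls from <tool_call>...</tool_call> tags
import ast
import re

def parse_tool_calls(text: str):
    calls = []
    for body in re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL):
        # the sample output uses Python-style dicts (single quotes), so use literal_eval
        calls.append(ast.literal_eval(body.strip()))
    return calls

output = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
print(parse_tool_calls(output))  # [{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}]
```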
#### Use with [Ollama](https://ollama.com/)
```python
import ollama
tools = [{
'type': 'function',
'function': {
'name': 'get_current_weather',
'description': 'Get the current weather for a city',
'parameters': {
'type': 'object',
'properties': {
'city': {
'type': 'string',
'description': 'The name of the city',
},
},
'required': ['city'],
},
},
},
{
'type': 'function',
'function': {
'name': 'get_places_to_visit',
'description': 'Get places to visit in a city',
'parameters': {
'type': 'object',
'properties': {
'city': {
'type': 'string',
'description': 'The name of the city',
},
},
'required': ['city'],
},
},
},
]
response = ollama.chat(
model='ajindal/llama3.1-storm:8b',
messages=[
{'role': 'system', 'content': 'Do not answer any vulgar questions.'},
{'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
],
tools=tools
)
print(response['message']) # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```
## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
author = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
title = { Llama-3.1-Storm-8B },
year = 2024,
url = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
doi = { 10.57967/hf/2902 },
publisher = { Hugging Face }
}
```
## Support Our Work
With three team members spread across three different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and 4 other competitions in the Finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).
**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.** |
koshimaki/dinosiglip-224px-1b-pref | koshimaki | 2024-11-01T03:11:48Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"prismatic",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-11-01T03:09:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
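Pending author documentation, here is a minimal loading sketch based only on this repo's metadata (a `prismatic` checkpoint that ships custom code); inputs and preprocessing are not documented here:
```python
# minimal sketch: load the checkpoint via its bundled custom code
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "koshimaki/dinosiglip-224px-1b-pref",
    trust_remote_code=True,  # required: the prismatic architecture ships as custom code
)
print(type(model))
```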
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gpustack/bce-embedding-base_v1-GGUF | gpustack | 2024-11-01T03:02:40Z | 472 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-31T15:37:54Z | ---
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- en
- zh
---
# bce-embedding-base_v1-GGUF
**Model creator**: [maidalun1020](https://huggingface.co/maidalun1020)<br/>
**Original model**: [maidalun1020/bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-09 23:52:00
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
<img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
</a>
<a href="https://twitter.com/YDopensource">
<img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}">
</a>
</p>
For the latest and most detailed information about bce-embedding-base_v1, please check the "Updates" at:
<p align="left">
<a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a>
</p>
## Key Features:
- Bilingual and crosslingual capability in English and Chinese;
- RAG-optimized, adapted to many real-world domains, including Education, Law, Finance, Medical, Literature, FAQ, Textbook, Wikipedia, etc.;
- Easy integrations for langchain and llamaindex in <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>;
- `EmbeddingModel` needs no carefully designed instruction prefix and recalls useful passages as-is;
- **Best practice**: retrieve the top 50-100 passages with [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) for recall, then rerank them with [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) and keep the top 5-10 for precision, as in the sketch below.
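A minimal sketch of this two-stage best practice, using the `EmbeddingModel`/`RerankerModel` APIs from the Quick Start below (the passages and top-k values are illustrative, and it assumes `encode` returns a NumPy array):
```python
# minimal two-stage retrieve-then-rerank sketch (illustrative passages and top-k;
# assumes EmbeddingModel.encode returns a NumPy array)
import numpy as np
from BCEmbedding import EmbeddingModel, RerankerModel

query = 'what do llamas eat?'
passages = ['Llamas graze on grass and hay.', 'GGUF is a file format.', 'Llamas are camelids.']

# stage 1: embedding recall
embed_model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")
emb = embed_model.encode([query] + passages)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize
recall_scores = emb[0] @ emb[1:].T                      # cosine similarity to the query
candidates = [passages[i] for i in np.argsort(-recall_scores)[:2]]

# stage 2: rerank the recalled candidates for precision
reranker = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")
rerank_scores = reranker.compute_score([[query, p] for p in candidates])
```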
## News:
- `BCEmbedding` **Technical Blog** (in Chinese): [为RAG而生-BCEmbedding技术报告](https://zhuanlan.zhihu.com/p/681370855)
- Related link for **RerankerModel**: [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)
## Third-party Examples:
- RAG applications: [QAnything](https://github.com/netease-youdao/qanything), [HuixiangDou](https://github.com/InternLM/HuixiangDou), [ChatPDF](https://github.com/shibing624/ChatPDF).
- Efficient inference frameworks: [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), [Xinference](https://github.com/xorbitsai/inference), [mindnlp (Huawei GPU)](https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/bce).


-----------------------------------------
<details open="open">
<summary>Click to Open Contents</summary>
- <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a>
- <a href="#-key-features" target="_Self">💡 Key Features</a>
- <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a>
- <a href="#-model-list" target="_Self">🍎 Model List</a>
- <a href="#-manual" target="_Self">📖 Manual</a>
- <a href="#installation" target="_Self">Installation</a>
- <a href="#quick-start" target="_Self">Quick Start (`transformers`, `sentence-transformers`)</a>
- <a href="#integrations-for-rag-frameworks" target="_Self">Integrations for RAG Frameworks (`langchain`, `llama_index`)</a>
- <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a>
- <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a>
- <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a>
- <a href="#-leaderboard" target="_Self">📈 Leaderboard</a>
- <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a>
- <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a>
- <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a>
- <a href="#-wechat-group" target="_Self">🧲 WeChat Group</a>
- <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a>
- <a href="#-license" target="_Self">🔐 License</a>
- <a href="#-related-links" target="_Self">🔗 Related Links</a>
</details>
<br>
**B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the `RerankerModel` excels at refining search results and ranking tasks.
`BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implementation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source implementation widely integrated in various Youdao products like [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation).
Distinguished for its bilingual and crosslingual proficiency, `BCEmbedding` excels in bridging Chinese and English linguistic gaps, which achieves
- **High performance on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**;
- **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**.
## 🌐 Bilingual and Crosslingual Superiority
Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings.
`EmbeddingModel` supports ***Chinese (ch) and English (en)*** (support for more languages is coming soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***.
## 💡 Key Features
- **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages.
- **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. See <a href=#rag-evaluations-in-llamaindex>RAG Evaluations in LlamaIndex</a>.
- **Efficient and Precise Retrieval**: Dual-encoder for efficient retrieval of `EmbeddingModel` in first stage, and cross-encoder of `RerankerModel` for enhanced precision and deeper semantic analysis in second stage.
- **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields.
- **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying query instruction for each task.
- **Meaningful Reranking Scores**: `RerankerModel` provides relevant scores to improve result quality and optimize large language model performance.
- **Proven in Production**: Successfully implemented and validated in Youdao's products.
## 🚀 Latest Updates
- ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
- ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performance of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performance of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
## 🍎 Model List
| Model Name | Model Type | Languages | Parameters | Weights |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) |
| bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) |
## 📖 Manual
### Installation
First, create a conda environment and activate it.
```bash
conda create --name bce python=3.10 -y
conda activate bce
```
Then install `BCEmbedding` for minimal installation:
```bash
pip install BCEmbedding==0.1.1
```
Or install from source:
```bash
git clone [email protected]:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e .
```
### Quick Start
#### 1. Based on `BCEmbedding`
Use `EmbeddingModel`; the `cls` [pooler](./BCEmbedding/models/embedding.py#L24) is the default.
```python
from BCEmbedding import EmbeddingModel
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences)
```
Use `RerankerModel` to calculate relevance scores and rerank:
```python
from BCEmbedding import RerankerModel
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]
# init reranker model
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")
# method 0: calculate scores of sentence pairs
scores = model.compute_score(sentence_pairs)
# method 1: rerank passages
rerank_results = model.rerank(query, passages)
```
NOTE:
- The [`RerankerModel.rerank`](./BCEmbedding/models/reranker.py#L137) method provides an advanced preprocessing step, used in our production setup, for building `sentence_pairs` when the passages are very long.
#### 2. Based on `transformers`
For `EmbeddingModel`:
```python
from transformers import AutoModel, AutoTokenizer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0] # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True) # normalize
```
For `RerankerModel`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# example sentence pairs of (query, passage)
sentence_pairs = [['input_query', 'passage_0'], ['input_query', 'passage_1']]
# get inputs
inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# calculate scores
scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
```
#### 3. Based on `sentence_transformers`
For `EmbeddingModel`:
```python
from sentence_transformers import SentenceTransformer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
## Note: recent sentence-transformers releases changed the model cache, so clean up "`SENTENCE_TRANSFORMERS_HOME`/maidalun1020_bce-embedding-base_v1" or "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1" first to download the new version.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences, normalize_embeddings=True)
```
For `RerankerModel`:
```python
from sentence_transformers import CrossEncoder
# init reranker model
model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512)
# example sentence pairs of (query, passage)
sentence_pairs = [['input_query', 'passage_0'], ['input_query', 'passage_1']]
# calculate scores of sentence pairs
scores = model.predict(sentence_pairs)
```
### Integrations for RAG Frameworks
#### 1. Used in `langchain`
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_name = 'maidalun1020/bce-embedding-base_v1'
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'batch_size': 64, 'normalize_embeddings': True, 'show_progress_bar': False}
embed_model = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# example #1. extract embeddings
query_embedding = embed_model.embed_query(query)
passages_embeddings = embed_model.embed_documents(passages)
# example #2. langchain retriever example
faiss_vectorstore = FAISS.from_texts(passages, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = faiss_vectorstore.as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.5, "k": 3})
related_passages = retriever.get_relevant_documents(query)
```
#### 2. Used in `llama_index`
```python
import os  # for reading API keys from the environment below
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 64, 'device': 'cuda'}
embed_model = HuggingFaceEmbedding(**model_args)
# example #1. extract embeddings
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages)
# example #2. rag example
llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is llama?")
```
## ⚙️ Evaluation
### Evaluate Semantic Representation by MTEB
We provide evaluation tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB).
#### 1. Embedding Models
Run the following command to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`):
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls
```
The evaluation covers ***114 datasets*** across the six task types **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"**.
***NOTE:***
- **All models are evaluated in their recommended pooling method (`pooler`)**.
- `mean` pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large".
- `cls` pooler: Other models.
- "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`.
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
```
#### 2. Reranker Models
Run the following command to evaluate `your_reranker_model` (e.g. `maidalun1020/bce-reranker-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`):
```bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1
```
The evaluation covers ***12 datasets*** of the **"Reranking"** task.
#### 3. Metrics Visualization Tool
We provide a one-click script that summarizes the evaluation results of `embedding` and `reranker` models into a markdown file; see [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
```bash
python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
```
### Evaluate RAG by LlamaIndex
[LlamaIndex](https://github.com/run-llama/llama_index) is a well-known data framework for LLM-based applications, particularly RAG. The [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) recently evaluated popular embedding and reranker models in a RAG pipeline and attracted wide attention. We follow its pipeline to evaluate our `BCEmbedding`.
First, install LlamaIndex:
```bash
pip install llama-index==0.9.22
```
#### 1. Metrics Definition
- Hit Rate:
  Hit Rate measures the fraction of queries for which the correct answer appears within the top-k retrieved documents; in simpler terms, how often the system gets it right within its top few guesses. ***The larger, the better.***
- Mean Reciprocal Rank (MRR):
  For each query, MRR takes the reciprocal of the rank of the highest-placed relevant document and averages these reciprocals across all queries: if the first relevant document is the top result, the reciprocal rank is 1; if it is second, 1/2; and so on. ***The larger, the better.*** A minimal computation sketch is shown below.
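For concreteness, here is a minimal Python sketch (ours, not taken from the official evaluation scripts) of how both metrics are computed for a single query; averaging the per-query values over all queries gives the reported numbers:
```python
def hit_and_reciprocal_rank(retrieved_ids, relevant_id, k=10):
    """retrieved_ids: ranked doc ids for one query; relevant_id: the gold document."""
    hit = 1.0 if relevant_id in retrieved_ids[:k] else 0.0
    rr = 1.0 / (retrieved_ids.index(relevant_id) + 1) if relevant_id in retrieved_ids else 0.0
    return hit, rr

# Example: the gold doc is ranked second, so it counts as a hit with reciprocal rank 0.5
hit, rr = hit_and_reciprocal_rank(["d3", "d7", "d1"], relevant_id="d7")
```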
#### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
To compare our `BCEmbedding` fairly with other embedding and reranker models, we provide a one-click script that reproduces the results of the LlamaIndex Blog, with our `BCEmbedding` included:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```
Then summarize the evaluation results with:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
```
The results reproduced from the LlamaIndex Blog can be found in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some clear ***conclusions***:
- In the `WithoutReranker` setting (column-wise comparison), our `bce-embedding-base_v1` outperforms all the other embedding models.
- With the embedding model fixed, comparing rerankers (row-wise comparison), our `bce-reranker-base_v1` achieves the best performance.
- ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
#### 3. Broad Domain Adaptability
The [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) evaluation is **monolingual, small-scale, and domain-specific** (covering only the "llama2" paper). To evaluate **broad domain adaptability together with bilingual and crosslingual capability**, we follow the blog's method to build a multi-domain, bilingual and crosslingual evaluation dataset (covering "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **generated with OpenAI's `gpt-4-1106-preview` for high quality**.
First, run the following command to evaluate the most popular and powerful embedding and reranker models:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
```
Then, run the following script to summarize the evaluation results:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results
```
The summary of multiple domains evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>.
## 📈 Leaderboard
### Semantic Representation Evaluations in MTEB
#### 1. Embedding Models
| Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | ***AVG*** (119) |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| bge-base-en-v1.5 | 768 | `cls` | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
| bge-base-zh-v1.5 | 768 | `cls` | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
| bge-large-en-v1.5 | 1024 | `cls` | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
| bge-large-zh-v1.5 | 1024 | `cls` | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
| e5-large-v2 | 1024 | `mean` | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
| gte-large | 1024 | `mean` | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
| gte-large-zh | 1024 | `cls` | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
| jina-embeddings-v2-base-en | 768 | `mean` | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
| m3e-base | 768 | `mean` | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
| m3e-large | 1024 | `mean` | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
| multilingual-e5-base | 768 | `mean` | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
| multilingual-e5-large | 1024 | `mean` | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
| ***bce-embedding-base_v1*** | 768 | `cls` | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
***NOTE:***
- Our ***bce-embedding-base_v1*** outperforms other open-source embedding models of comparable size, trailing only the best large models slightly.
- ***114 datasets*** across **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
- The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to the `Retrieval` task.
- For more evaluation details, see [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
#### 2. Reranker Models
| Model | Reranking (12) | ***AVG*** (12) |
| :--------------------------------- | :-------------: | :--------------------: |
| bge-reranker-base | 59.04 | 59.04 |
| bge-reranker-large | 60.86 | 60.86 |
| ***bce-reranker-base_v1*** | **61.29** | ***61.29*** |
***NOTE:***
- Our ***bce-reranker-base_v1*** outperforms other open-source reranker models.
- ***12 datasets*** of **"Reranking"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
- For more evaluation details, see [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
### RAG Evaluations in LlamaIndex
#### 1. Multiple Domains Scenarios

***NOTE:***
- Evaluated in **`["en", "zh", "en-zh", "zh-en"]` setting**.
- In the `WithoutReranker` setting (column-wise comparison), our `bce-embedding-base_v1` outperforms all the other embedding models, both open-source and proprietary.
- With the embedding model fixed, comparing rerankers (row-wise comparison), our `bce-reranker-base_v1` achieves the best performance, again among both open-source and proprietary models.
- **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.
## 🛠 Youdao's BCEmbedding API
For users who prefer a hassle-free experience without downloading and configuring the model themselves, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate `BCEmbedding` into your projects, bypassing manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html).
## 🧲 WeChat Group
Scan the QR code below to join the official WeChat group.

## ✏️ Citation
If you use `BCEmbedding` in your research or project, please cite it and give the repo a star:
```
@misc{youdao_bcembedding_2023,
title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
author={NetEase Youdao, Inc.},
year={2023},
howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
}
```
## 🔐 License
`BCEmbedding` is licensed under the [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE).
## 🔗 Related Links
[Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)
[FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
[MTEB](https://github.com/embeddings-benchmark/mteb)
[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
[LlamaIndex](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
|
alme/ppo-LunarLander-v2 | alme | 2024-11-01T02:53:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-08T07:19:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 218.67 +/- 95.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is assumed; check the repo's file list for the exact archive name.
checkpoint = load_from_hub("alme/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/lumimaid-8B-autotrain-i1-GGUF | mradermacher | 2024-11-01T02:41:08Z | 120 | 1 | transformers | [
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"en",
"dataset:mpasila/Literotica-stories-short-json-unfiltered",
"dataset:Chadgpt-fam/sexting_dataset",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-11-01T01:28:36Z | ---
base_model: mrcuddle/lumimaid-8B-autotrain
datasets:
- mpasila/Literotica-stories-short-json-unfiltered
- Chadgpt-fam/sexting_dataset
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mrcuddle/lumimaid-8B-autotrain
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ychu612/RSAVAV_SQ_CLF | ychu612 | 2024-11-01T02:38:11Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-31T22:27:47Z | ---
library_name: transformers
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
model-index:
- name: RSAVAV_SQ_CLF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RSAVAV_SQ_CLF
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
DanJoshua/profesor_Swin3D_S_RWF2000 | DanJoshua | 2024-11-01T02:36:22Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T20:16:35Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: profesor_Swin3D_S_RWF2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# profesor_Swin3D_S_RWF2000
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5293
- Accuracy: 0.89
- F1: 0.8900
- Precision: 0.8902
- Recall: 0.89
- Roc Auc: 0.9532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 480
- training_steps: 4800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 0.2107 | 2.0333 | 480 | 0.3290 | 0.88 | 0.8795 | 0.8865 | 0.88 | 0.9568 |
| 0.1399 | 5.0333 | 960 | 0.4941 | 0.9 | 0.9000 | 0.9002 | 0.9 | 0.9642 |
| 0.1221 | 8.0333 | 1440 | 0.4824 | 0.8975 | 0.8974 | 0.8983 | 0.8975 | 0.9675 |
| 0.1474 | 11.0333 | 1920 | 0.5392 | 0.8975 | 0.8975 | 0.8975 | 0.8975 | 0.9665 |
| 0.105 | 14.0333 | 2400 | 0.7004 | 0.895 | 0.8948 | 0.8982 | 0.895 | 0.9686 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
Panav77/sd-class-butterflies-32 | Panav77 | 2024-11-01T02:33:13Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-11-01T02:33:00Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Panav77/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
pwork7/rlhflow_mix_dart_code_v1_iter2 | pwork7 | 2024-11-01T02:31:05Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T02:27:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gaunernst/bert-L2-H768-uncased | gaunernst | 2024-11-01T02:22:12Z | 246 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1908.08962",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-02T07:26:04Z | ---
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
language:
- en
---
# BERT L2-H768 (uncased)
Mini BERT models from https://arxiv.org/abs/1908.08962 that the HF team didn't convert. The original [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py) is used.
See the original Google repo: [google-research/bert](https://github.com/google-research/bert)
Note: it's not clear if these checkpoints have undergone knowledge distillation.
## Model variants
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[2/128 (BERT-Tiny)][2_128]|[2/256][2_256]|[2/512][2_512]|[**2/768**][2_768]|
| **L=4** |[4/128][4_128]|[4/256 (BERT-Mini)][4_256]|[4/512 (BERT-Small)][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[8/512 (BERT-Medium)][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[12/768 (BERT-Base, original)][12_768]|
[2_128]: https://huggingface.co/gaunernst/bert-tiny-uncased
[2_256]: https://huggingface.co/gaunernst/bert-L2-H256-uncased
[2_512]: https://huggingface.co/gaunernst/bert-L2-H512-uncased
[2_768]: https://huggingface.co/gaunernst/bert-L2-H768-uncased
[4_128]: https://huggingface.co/gaunernst/bert-L4-H128-uncased
[4_256]: https://huggingface.co/gaunernst/bert-mini-uncased
[4_512]: https://huggingface.co/gaunernst/bert-small-uncased
[4_768]: https://huggingface.co/gaunernst/bert-L4-H768-uncased
[6_128]: https://huggingface.co/gaunernst/bert-L6-H128-uncased
[6_256]: https://huggingface.co/gaunernst/bert-L6-H256-uncased
[6_512]: https://huggingface.co/gaunernst/bert-L6-H512-uncased
[6_768]: https://huggingface.co/gaunernst/bert-L6-H768-uncased
[8_128]: https://huggingface.co/gaunernst/bert-L8-H128-uncased
[8_256]: https://huggingface.co/gaunernst/bert-L8-H256-uncased
[8_512]: https://huggingface.co/gaunernst/bert-medium-uncased
[8_768]: https://huggingface.co/gaunernst/bert-L8-H768-uncased
[10_128]: https://huggingface.co/gaunernst/bert-L10-H128-uncased
[10_256]: https://huggingface.co/gaunernst/bert-L10-H256-uncased
[10_512]: https://huggingface.co/gaunernst/bert-L10-H512-uncased
[10_768]: https://huggingface.co/gaunernst/bert-L10-H768-uncased
[12_128]: https://huggingface.co/gaunernst/bert-L12-H128-uncased
[12_256]: https://huggingface.co/gaunernst/bert-L12-H256-uncased
[12_512]: https://huggingface.co/gaunernst/bert-L12-H512-uncased
[12_768]: https://huggingface.co/bert-base-uncased
## Usage
See other BERT model cards, e.g. https://huggingface.co/bert-base-uncased. A minimal sketch is shown below.
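As a quick illustration (ours, not an official snippet from this repo), the checkpoint can be used for masked-token prediction via the standard `transformers` pipeline:
```python
from transformers import pipeline

# Load this checkpoint for masked language modeling and predict the masked token.
unmasker = pipeline("fill-mask", model="gaunernst/bert-L2-H768-uncased")
print(unmasker("The capital of France is [MASK]."))
```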
## Citation
```bibtex
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
``` |
gpustack/jina-embeddings-v2-base-zh-GGUF | gpustack | 2024-11-01T02:15:23Z | 571 | 1 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"en",
"zh",
"arxiv:2108.12409",
"arxiv:2402.17016",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | feature-extraction | 2024-11-01T01:35:57Z | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
inference: false
license: apache-2.0
language:
- en
- zh
model-index:
- name: jina-embeddings-v2-base-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 48.51403119231363
- type: cos_sim_spearman
value: 50.5928547846445
- type: euclidean_pearson
value: 48.750436310559074
- type: euclidean_spearman
value: 50.50950238691385
- type: manhattan_pearson
value: 48.7866189440328
- type: manhattan_spearman
value: 50.58692402017165
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 50.25985700105725
- type: cos_sim_spearman
value: 51.28815934593989
- type: euclidean_pearson
value: 52.70329248799904
- type: euclidean_spearman
value: 50.94101139559258
- type: manhattan_pearson
value: 52.6647237400892
- type: manhattan_spearman
value: 50.922441325406176
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 34.944
- type: f1
value: 34.06478860660109
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 65.15667035488342
- type: cos_sim_spearman
value: 66.07110142081
- type: euclidean_pearson
value: 60.447598102249714
- type: euclidean_spearman
value: 61.826575796578766
- type: manhattan_pearson
value: 60.39364279354984
- type: manhattan_spearman
value: 61.78743491223281
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 39.96714175391701
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 38.39863566717934
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 83.63680381780644
- type: mrr
value: 86.16476190476192
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 83.74350667859487
- type: mrr
value: 86.10388888888889
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.072
- type: map_at_10
value: 32.942
- type: map_at_100
value: 34.768
- type: map_at_1000
value: 34.902
- type: map_at_3
value: 29.357
- type: map_at_5
value: 31.236000000000004
- type: mrr_at_1
value: 34.259
- type: mrr_at_10
value: 41.957
- type: mrr_at_100
value: 42.982
- type: mrr_at_1000
value: 43.042
- type: mrr_at_3
value: 39.722
- type: mrr_at_5
value: 40.898
- type: ndcg_at_1
value: 34.259
- type: ndcg_at_10
value: 39.153
- type: ndcg_at_100
value: 46.493
- type: ndcg_at_1000
value: 49.01
- type: ndcg_at_3
value: 34.636
- type: ndcg_at_5
value: 36.278
- type: precision_at_1
value: 34.259
- type: precision_at_10
value: 8.815000000000001
- type: precision_at_100
value: 1.474
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 19.73
- type: precision_at_5
value: 14.174000000000001
- type: recall_at_1
value: 22.072
- type: recall_at_10
value: 48.484
- type: recall_at_100
value: 79.035
- type: recall_at_1000
value: 96.15
- type: recall_at_3
value: 34.607
- type: recall_at_5
value: 40.064
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 76.7047504509922
- type: cos_sim_ap
value: 85.26649874800871
- type: cos_sim_f1
value: 78.13528724646915
- type: cos_sim_precision
value: 71.57587548638132
- type: cos_sim_recall
value: 86.01823708206688
- type: dot_accuracy
value: 70.13830426939266
- type: dot_ap
value: 77.01510412382171
- type: dot_f1
value: 73.56710042713817
- type: dot_precision
value: 63.955094991364426
- type: dot_recall
value: 86.57937806873977
- type: euclidean_accuracy
value: 75.53818400481059
- type: euclidean_ap
value: 84.34668448241264
- type: euclidean_f1
value: 77.51741608613047
- type: euclidean_precision
value: 70.65614777756399
- type: euclidean_recall
value: 85.85457096095394
- type: manhattan_accuracy
value: 75.49007817197835
- type: manhattan_ap
value: 84.40297506704299
- type: manhattan_f1
value: 77.63185324160932
- type: manhattan_precision
value: 70.03949595636637
- type: manhattan_recall
value: 87.07037643207856
- type: max_accuracy
value: 76.7047504509922
- type: max_ap
value: 85.26649874800871
- type: max_f1
value: 78.13528724646915
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 69.178
- type: map_at_10
value: 77.523
- type: map_at_100
value: 77.793
- type: map_at_1000
value: 77.79899999999999
- type: map_at_3
value: 75.878
- type: map_at_5
value: 76.849
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.55
- type: mrr_at_100
value: 77.819
- type: mrr_at_1000
value: 77.826
- type: mrr_at_3
value: 75.957
- type: mrr_at_5
value: 76.916
- type: ndcg_at_1
value: 69.44200000000001
- type: ndcg_at_10
value: 81.217
- type: ndcg_at_100
value: 82.45
- type: ndcg_at_1000
value: 82.636
- type: ndcg_at_3
value: 77.931
- type: ndcg_at_5
value: 79.655
- type: precision_at_1
value: 69.44200000000001
- type: precision_at_10
value: 9.357
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.1
- type: precision_at_5
value: 17.724
- type: recall_at_1
value: 69.178
- type: recall_at_10
value: 92.624
- type: recall_at_100
value: 98.209
- type: recall_at_1000
value: 99.684
- type: recall_at_3
value: 83.772
- type: recall_at_5
value: 87.882
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.163999999999998
- type: map_at_10
value: 76.386
- type: map_at_100
value: 79.339
- type: map_at_1000
value: 79.39500000000001
- type: map_at_3
value: 52.959
- type: map_at_5
value: 66.59
- type: mrr_at_1
value: 87.9
- type: mrr_at_10
value: 91.682
- type: mrr_at_100
value: 91.747
- type: mrr_at_1000
value: 91.751
- type: mrr_at_3
value: 91.267
- type: mrr_at_5
value: 91.527
- type: ndcg_at_1
value: 87.9
- type: ndcg_at_10
value: 84.569
- type: ndcg_at_100
value: 87.83800000000001
- type: ndcg_at_1000
value: 88.322
- type: ndcg_at_3
value: 83.473
- type: ndcg_at_5
value: 82.178
- type: precision_at_1
value: 87.9
- type: precision_at_10
value: 40.605000000000004
- type: precision_at_100
value: 4.752
- type: precision_at_1000
value: 0.488
- type: precision_at_3
value: 74.9
- type: precision_at_5
value: 62.96000000000001
- type: recall_at_1
value: 25.163999999999998
- type: recall_at_10
value: 85.97399999999999
- type: recall_at_100
value: 96.63000000000001
- type: recall_at_1000
value: 99.016
- type: recall_at_3
value: 55.611999999999995
- type: recall_at_5
value: 71.936
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 48.6
- type: map_at_10
value: 58.831
- type: map_at_100
value: 59.427
- type: map_at_1000
value: 59.44199999999999
- type: map_at_3
value: 56.383
- type: map_at_5
value: 57.753
- type: mrr_at_1
value: 48.6
- type: mrr_at_10
value: 58.831
- type: mrr_at_100
value: 59.427
- type: mrr_at_1000
value: 59.44199999999999
- type: mrr_at_3
value: 56.383
- type: mrr_at_5
value: 57.753
- type: ndcg_at_1
value: 48.6
- type: ndcg_at_10
value: 63.951
- type: ndcg_at_100
value: 66.72200000000001
- type: ndcg_at_1000
value: 67.13900000000001
- type: ndcg_at_3
value: 58.882
- type: ndcg_at_5
value: 61.373
- type: precision_at_1
value: 48.6
- type: precision_at_10
value: 8.01
- type: precision_at_100
value: 0.928
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 22.033
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 48.6
- type: recall_at_10
value: 80.10000000000001
- type: recall_at_100
value: 92.80000000000001
- type: recall_at_1000
value: 96.1
- type: recall_at_3
value: 66.10000000000001
- type: recall_at_5
value: 72.2
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 47.36437091188918
- type: f1
value: 36.60946954228577
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 79.5684803001876
- type: ap
value: 42.671935929201524
- type: f1
value: 73.31912729103752
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 68.62670112113864
- type: cos_sim_spearman
value: 75.74009123170768
- type: euclidean_pearson
value: 73.93002595958237
- type: euclidean_spearman
value: 75.35222935003587
- type: manhattan_pearson
value: 73.89870445158144
- type: manhattan_spearman
value: 75.31714936339398
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.5372713650176
- type: mrr
value: 30.163095238095238
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 65.054
- type: map_at_10
value: 74.156
- type: map_at_100
value: 74.523
- type: map_at_1000
value: 74.535
- type: map_at_3
value: 72.269
- type: map_at_5
value: 73.41
- type: mrr_at_1
value: 67.24900000000001
- type: mrr_at_10
value: 74.78399999999999
- type: mrr_at_100
value: 75.107
- type: mrr_at_1000
value: 75.117
- type: mrr_at_3
value: 73.13499999999999
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 67.24900000000001
- type: ndcg_at_10
value: 77.96300000000001
- type: ndcg_at_100
value: 79.584
- type: ndcg_at_1000
value: 79.884
- type: ndcg_at_3
value: 74.342
- type: ndcg_at_5
value: 76.278
- type: precision_at_1
value: 67.24900000000001
- type: precision_at_10
value: 9.466
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 27.955999999999996
- type: precision_at_5
value: 17.817
- type: recall_at_1
value: 65.054
- type: recall_at_10
value: 89.113
- type: recall_at_100
value: 96.369
- type: recall_at_1000
value: 98.714
- type: recall_at_3
value: 79.45400000000001
- type: recall_at_5
value: 84.06
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.1977135171486
- type: f1
value: 67.23114308718404
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.92669804976462
- type: f1
value: 72.90628475628779
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 49.2
- type: map_at_10
value: 54.539
- type: map_at_100
value: 55.135
- type: map_at_1000
value: 55.19199999999999
- type: map_at_3
value: 53.383
- type: map_at_5
value: 54.142999999999994
- type: mrr_at_1
value: 49.2
- type: mrr_at_10
value: 54.539
- type: mrr_at_100
value: 55.135999999999996
- type: mrr_at_1000
value: 55.19199999999999
- type: mrr_at_3
value: 53.383
- type: mrr_at_5
value: 54.142999999999994
- type: ndcg_at_1
value: 49.2
- type: ndcg_at_10
value: 57.123000000000005
- type: ndcg_at_100
value: 60.21300000000001
- type: ndcg_at_1000
value: 61.915
- type: ndcg_at_3
value: 54.772
- type: ndcg_at_5
value: 56.157999999999994
- type: precision_at_1
value: 49.2
- type: precision_at_10
value: 6.52
- type: precision_at_100
value: 0.8009999999999999
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 19.6
- type: precision_at_5
value: 12.44
- type: recall_at_1
value: 49.2
- type: recall_at_10
value: 65.2
- type: recall_at_100
value: 80.10000000000001
- type: recall_at_1000
value: 93.89999999999999
- type: recall_at_3
value: 58.8
- type: recall_at_5
value: 62.2
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 63.29333333333334
- type: f1
value: 63.03293854259612
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 75.69030860855442
- type: cos_sim_ap
value: 80.6157833772759
- type: cos_sim_f1
value: 77.87524366471735
- type: cos_sim_precision
value: 72.3076923076923
- type: cos_sim_recall
value: 84.37170010559663
- type: dot_accuracy
value: 67.78559826746074
- type: dot_ap
value: 72.00871467527499
- type: dot_f1
value: 72.58722247394654
- type: dot_precision
value: 63.57142857142857
- type: dot_recall
value: 84.58289334741288
- type: euclidean_accuracy
value: 75.20303194369248
- type: euclidean_ap
value: 80.98587256415605
- type: euclidean_f1
value: 77.26396917148362
- type: euclidean_precision
value: 71.03631532329496
- type: euclidean_recall
value: 84.68848996832101
- type: manhattan_accuracy
value: 75.20303194369248
- type: manhattan_ap
value: 80.93460699513219
- type: manhattan_f1
value: 77.124773960217
- type: manhattan_precision
value: 67.43083003952569
- type: manhattan_recall
value: 90.07391763463569
- type: max_accuracy
value: 75.69030860855442
- type: max_ap
value: 80.98587256415605
- type: max_f1
value: 77.87524366471735
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 87.00000000000001
- type: ap
value: 83.24372135949511
- type: f1
value: 86.95554191530607
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 37.57616811591219
- type: cos_sim_spearman
value: 41.490259084930045
- type: euclidean_pearson
value: 38.9155043692188
- type: euclidean_spearman
value: 39.16056534305623
- type: manhattan_pearson
value: 38.76569892264335
- type: manhattan_spearman
value: 38.99891685590743
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 35.44858610359665
- type: cos_sim_spearman
value: 38.11128146262466
- type: euclidean_pearson
value: 31.928644189822457
- type: euclidean_spearman
value: 34.384936631696554
- type: manhattan_pearson
value: 31.90586687414376
- type: manhattan_spearman
value: 34.35770153777186
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.54931957553592
- type: cos_sim_spearman
value: 69.25068863016632
- type: euclidean_pearson
value: 50.26525596106869
- type: euclidean_spearman
value: 63.83352741910006
- type: manhattan_pearson
value: 49.98798282198196
- type: manhattan_spearman
value: 63.87649521907841
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.52782476625825
- type: cos_sim_spearman
value: 82.55618986168398
- type: euclidean_pearson
value: 78.48190631687673
- type: euclidean_spearman
value: 78.39479731354655
- type: manhattan_pearson
value: 78.51176592165885
- type: manhattan_spearman
value: 78.42363787303265
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 67.36693873615643
- type: mrr
value: 77.83847701797939
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.795
- type: map_at_10
value: 72.258
- type: map_at_100
value: 76.049
- type: map_at_1000
value: 76.134
- type: map_at_3
value: 50.697
- type: map_at_5
value: 62.324999999999996
- type: mrr_at_1
value: 86.634
- type: mrr_at_10
value: 89.792
- type: mrr_at_100
value: 89.91900000000001
- type: mrr_at_1000
value: 89.923
- type: mrr_at_3
value: 89.224
- type: mrr_at_5
value: 89.608
- type: ndcg_at_1
value: 86.634
- type: ndcg_at_10
value: 80.589
- type: ndcg_at_100
value: 84.812
- type: ndcg_at_1000
value: 85.662
- type: ndcg_at_3
value: 82.169
- type: ndcg_at_5
value: 80.619
- type: precision_at_1
value: 86.634
- type: precision_at_10
value: 40.389
- type: precision_at_100
value: 4.93
- type: precision_at_1000
value: 0.513
- type: precision_at_3
value: 72.104
- type: precision_at_5
value: 60.425
- type: recall_at_1
value: 25.795
- type: recall_at_10
value: 79.565
- type: recall_at_100
value: 93.24799999999999
- type: recall_at_1000
value: 97.595
- type: recall_at_3
value: 52.583999999999996
- type: recall_at_5
value: 66.175
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 47.648999999999994
- type: f1
value: 46.28925837008413
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 54.07641891287953
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 53.423702062353954
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 55.7
- type: map_at_10
value: 65.923
- type: map_at_100
value: 66.42
- type: map_at_1000
value: 66.431
- type: map_at_3
value: 63.9
- type: map_at_5
value: 65.225
- type: mrr_at_1
value: 55.60000000000001
- type: mrr_at_10
value: 65.873
- type: mrr_at_100
value: 66.36999999999999
- type: mrr_at_1000
value: 66.381
- type: mrr_at_3
value: 63.849999999999994
- type: mrr_at_5
value: 65.17500000000001
- type: ndcg_at_1
value: 55.7
- type: ndcg_at_10
value: 70.621
- type: ndcg_at_100
value: 72.944
- type: ndcg_at_1000
value: 73.25399999999999
- type: ndcg_at_3
value: 66.547
- type: ndcg_at_5
value: 68.93599999999999
- type: precision_at_1
value: 55.7
- type: precision_at_10
value: 8.52
- type: precision_at_100
value: 0.958
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.733
- type: precision_at_5
value: 16
- type: recall_at_1
value: 55.7
- type: recall_at_10
value: 85.2
- type: recall_at_100
value: 95.8
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 74.2
- type: recall_at_5
value: 80
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.54
- type: ap
value: 66.13603199670062
- type: f1
value: 82.61420654584116
---
# jina-embeddings-v2-base-zh-GGUF
**Model creator**: [jinaai](https://huggingface.co/jinaai)<br/>
**Original model**: [jina-embeddings-v2-base-zh](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
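As a quick usage sketch (the exact GGUF filename below is an assumption; substitute whichever quantized file you downloaded from this repo), embeddings can be computed locally with llama.cpp's `llama-embedding` tool:
```bash
# Filename is assumed; use the actual GGUF file from this repo.
./llama-embedding -m jina-embeddings-v2-base-zh-Q8_0.gguf \
    -p "How is the weather today?" --pooling mean
```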
---
<!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to start using `jina-embeddings-v2-base-zh` is Jina AI's [Embedding API](https://jina.ai/embeddings/).
## Intended Usage & Model Info
`jina-embeddings-v2-base-zh` is a Chinese/English bilingual text **embedding model** supporting sequence lengths of up to **8192 tokens**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequences (a minimal sketch of this bias follows the model list below).
Unlike previous monolingual or multilingual embedding models, we designed it for high performance in both monolingual (Chinese-to-Chinese) and cross-lingual (Chinese-to-English) retrieval and trained it specifically to support mixed Chinese-English input without bias toward either language.
Additionally, we provide the following embedding models:
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters, Chinese-English bilingual embeddings **(you are here)**.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters, German-English bilingual embeddings.
- [`jina-embeddings-v2-base-es`](): Spanish-English bilingual embeddings (coming soon).
- [`jina-embeddings-v2-base-code`](https://huggingface.co/jinaai/jina-embeddings-v2-base-code): 161 million parameters, code embeddings.
## Data & Parameters
The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).
## Usage
**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>
### Why mean pooling?
Mean pooling takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be the most effective way to produce high-quality sentence embeddings.
We offer an `encode` function that handles this for you.
However, if you would like to do it without using the default `encode` function:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average all token embeddings, masking out padding via the attention mask
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', '今天天气怎么样?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-zh')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-zh', trust_remote_code=True, torch_dtype=torch.bfloat16)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

# Pool, then L2-normalize so that cosine similarity reduces to a dot product
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>
You can use Jina embedding models directly from the `transformers` package.
```python
!pip install transformers
import torch
from transformers import AutoModel
from numpy.linalg import norm
cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-zh', trust_remote_code=True, torch_dtype=torch.bfloat16)
embeddings = model.encode(['How is the weather today?', '今天天气怎么样?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only want to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
['Very long ... document'],
max_length=2048
)
```
If you want to use the model together with the [sentence-transformers package](https://github.com/UKPLab/sentence-transformers/), make sure that you have installed the latest release and set `trust_remote_code=True` as well:
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from numpy.linalg import norm
cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = SentenceTransformer('jinaai/jina-embeddings-v2-base-zh', trust_remote_code=True)
embeddings = model.encode(['How is the weather today?', '今天天气怎么样?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged into Hugging Face as well):
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"jinaai/jina-embeddings-v2-base-zh", # switch to en/zh for English or Chinese
trust_remote_code=True
)
# control your input sequence length up to 8192
model.max_seq_length = 1024
embeddings = model.encode([
'How is the weather today?',
'今天天气怎么样?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using the Transformers Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
## Use Jina Embeddings for RAG
According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
## Troubleshooting
**Loading of Model Code failed**
If you forgot to pass the `trust_remote_code=True` flag when calling `AutoModel.from_pretrained` or initializing the model via the `SentenceTransformer` class, you will receive an error that the model weights could not be initialized.
This is caused by `transformers` falling back to creating a default BERT model instead of a jina-embedding model:
```bash
Some weights of the model checkpoint at jinaai/jina-embeddings-v2-base-zh were not used when initializing BertModel: ['encoder.layer.2.mlp.layernorm.weight', 'encoder.layer.3.mlp.layernorm.weight', 'encoder.layer.10.mlp.wo.bias', 'encoder.layer.5.mlp.wo.bias', 'encoder.layer.2.mlp.layernorm.bias', 'encoder.layer.1.mlp.gated_layers.weight', 'encoder.layer.5.mlp.gated_layers.weight', 'encoder.layer.8.mlp.layernorm.bias', ...
```
**User is not logged into Hugging Face**
The model is only available under [gated access](https://huggingface.co/docs/hub/models-gated).
This means you need to be logged into Hugging Face to load it.
If you receive the following error, you need to provide an access token, either by logging in with the `huggingface-cli` or by providing the token via an environment variable:
```bash
OSError: jinaai/jina-embeddings-v2-base-zh is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
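For example (a minimal sketch; the token value is a placeholder):
```python
# One-time authentication; replace the placeholder with your own access token.
from huggingface_hub import login

login(token="hf_xxxxxxxxxxxxxxxx")  # alternatively, run `huggingface-cli login`
```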
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@article{mohr2024multi,
title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings},
author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others},
journal={arXiv preprint arXiv:2402.17016},
year={2024}
}
```
|
TheHamzahPOCs/bart-cnn-samsum-finetuned | TheHamzahPOCs | 2024-11-01T02:08:38Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-01T02:07:06Z | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2608
## Model description
More information needed
## Intended uses & limitations
More information needed
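As a starting point, the checkpoint should load with the standard summarization pipeline; the snippet below is a sketch rather than officially documented usage, with an example dialogue in the style of samsum:
```python
# Hedged usage sketch: summarize a dialogue with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="TheHamzahPOCs/bart-cnn-samsum-finetuned")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```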
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
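For reference, these settings correspond roughly to the following `Seq2SeqTrainingArguments`; this is a sketch reconstructed from the list above, not the actual training script, and `output_dir` is an assumption:
```python
# Hyperparameters above expressed as Seq2SeqTrainingArguments (output_dir assumed).
# Adam betas (0.9, 0.999) and epsilon (1e-08) match the transformers defaults.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="bart-cnn-samsum-finetuned",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```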
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1908 | 1.0 | 19 | 0.2608 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
gpustack/jina-embeddings-v2-base-en-GGUF | gpustack | 2024-11-01T02:01:38Z | 216 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:allenai/c4",
"arxiv:2108.12409",
"arxiv:2310.19923",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | feature-extraction | 2024-11-01T01:35:36Z | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
datasets:
- allenai/c4
language: en
inference: false
license: apache-2.0
model-index:
- name: jina-embedding-b-en-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.73134328358209
- type: ap
value: 37.765427081831035
- type: f1
value: 68.79367444339518
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.544275
- type: ap
value: 84.61328675662887
- type: f1
value: 88.51879035862375
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.263999999999996
- type: f1
value: 43.778759656699435
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.693
- type: map_at_10
value: 35.487
- type: map_at_100
value: 36.862
- type: map_at_1000
value: 36.872
- type: map_at_3
value: 30.049999999999997
- type: map_at_5
value: 32.966
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 35.565999999999995
- type: mrr_at_100
value: 36.948
- type: mrr_at_1000
value: 36.958
- type: mrr_at_3
value: 30.121
- type: mrr_at_5
value: 33.051
- type: ndcg_at_1
value: 21.693
- type: ndcg_at_10
value: 44.181
- type: ndcg_at_100
value: 49.982
- type: ndcg_at_1000
value: 50.233000000000004
- type: ndcg_at_3
value: 32.830999999999996
- type: ndcg_at_5
value: 38.080000000000005
- type: precision_at_1
value: 21.693
- type: precision_at_10
value: 7.248
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 13.632
- type: precision_at_5
value: 10.725
- type: recall_at_1
value: 21.693
- type: recall_at_10
value: 72.475
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 40.896
- type: recall_at_5
value: 53.627
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.39242428696777
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.675626784714
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.247725694904034
- type: mrr
value: 74.91359978894604
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.68003802970496
- type: cos_sim_spearman
value: 81.23438110096286
- type: euclidean_pearson
value: 81.87462986142582
- type: euclidean_spearman
value: 81.23438110096286
- type: manhattan_pearson
value: 81.61162566600755
- type: manhattan_spearman
value: 81.11329400456184
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.01298701298701
- type: f1
value: 83.31690714969382
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.050108150972086
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.15731442819715
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.391999999999996
- type: map_at_10
value: 42.597
- type: map_at_100
value: 44.07
- type: map_at_1000
value: 44.198
- type: map_at_3
value: 38.957
- type: map_at_5
value: 40.961
- type: mrr_at_1
value: 37.196
- type: mrr_at_10
value: 48.152
- type: mrr_at_100
value: 48.928
- type: mrr_at_1000
value: 48.964999999999996
- type: mrr_at_3
value: 45.446
- type: mrr_at_5
value: 47.205999999999996
- type: ndcg_at_1
value: 37.196
- type: ndcg_at_10
value: 49.089
- type: ndcg_at_100
value: 54.471000000000004
- type: ndcg_at_1000
value: 56.385
- type: ndcg_at_3
value: 43.699
- type: ndcg_at_5
value: 46.22
- type: precision_at_1
value: 37.196
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 20.839
- type: precision_at_5
value: 14.936
- type: recall_at_1
value: 31.391999999999996
- type: recall_at_10
value: 61.876
- type: recall_at_100
value: 84.214
- type: recall_at_1000
value: 95.985
- type: recall_at_3
value: 46.6
- type: recall_at_5
value: 53.588
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.083
- type: map_at_10
value: 38.812999999999995
- type: map_at_100
value: 40.053
- type: map_at_1000
value: 40.188
- type: map_at_3
value: 36.111
- type: map_at_5
value: 37.519000000000005
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.85
- type: mrr_at_100
value: 45.546
- type: mrr_at_1000
value: 45.593
- type: mrr_at_3
value: 42.686
- type: mrr_at_5
value: 43.909
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 44.443
- type: ndcg_at_100
value: 48.979
- type: ndcg_at_1000
value: 51.154999999999994
- type: ndcg_at_3
value: 40.660000000000004
- type: ndcg_at_5
value: 42.193000000000005
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 8.433
- type: precision_at_100
value: 1.369
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 19.894000000000002
- type: precision_at_5
value: 13.873
- type: recall_at_1
value: 29.083
- type: recall_at_10
value: 54.313
- type: recall_at_100
value: 73.792
- type: recall_at_1000
value: 87.629
- type: recall_at_3
value: 42.257
- type: recall_at_5
value: 47.066
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.556000000000004
- type: map_at_10
value: 50.698
- type: map_at_100
value: 51.705
- type: map_at_1000
value: 51.768
- type: map_at_3
value: 47.848
- type: map_at_5
value: 49.358000000000004
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 54.191
- type: mrr_at_100
value: 54.852999999999994
- type: mrr_at_1000
value: 54.885
- type: mrr_at_3
value: 51.954
- type: mrr_at_5
value: 53.13
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 56.516
- type: ndcg_at_100
value: 60.477000000000004
- type: ndcg_at_1000
value: 61.746
- type: ndcg_at_3
value: 51.601
- type: ndcg_at_5
value: 53.795
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 9.009
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.989
- type: precision_at_5
value: 15.473
- type: recall_at_1
value: 38.556000000000004
- type: recall_at_10
value: 70.159
- type: recall_at_100
value: 87.132
- type: recall_at_1000
value: 96.16
- type: recall_at_3
value: 56.906
- type: recall_at_5
value: 62.332
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.238
- type: map_at_10
value: 32.5
- type: map_at_100
value: 33.637
- type: map_at_1000
value: 33.719
- type: map_at_3
value: 30.026999999999997
- type: map_at_5
value: 31.555
- type: mrr_at_1
value: 26.328000000000003
- type: mrr_at_10
value: 34.44
- type: mrr_at_100
value: 35.455999999999996
- type: mrr_at_1000
value: 35.521
- type: mrr_at_3
value: 32.034
- type: mrr_at_5
value: 33.565
- type: ndcg_at_1
value: 26.328000000000003
- type: ndcg_at_10
value: 37.202
- type: ndcg_at_100
value: 42.728
- type: ndcg_at_1000
value: 44.792
- type: ndcg_at_3
value: 32.368
- type: ndcg_at_5
value: 35.008
- type: precision_at_1
value: 26.328000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.8880000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.672
- type: precision_at_5
value: 9.74
- type: recall_at_1
value: 24.238
- type: recall_at_10
value: 49.829
- type: recall_at_100
value: 75.21
- type: recall_at_1000
value: 90.521
- type: recall_at_3
value: 36.867
- type: recall_at_5
value: 43.241
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.378
- type: map_at_10
value: 22.817999999999998
- type: map_at_100
value: 23.977999999999998
- type: map_at_1000
value: 24.108
- type: map_at_3
value: 20.719
- type: map_at_5
value: 21.889
- type: mrr_at_1
value: 19.03
- type: mrr_at_10
value: 27.022000000000002
- type: mrr_at_100
value: 28.011999999999997
- type: mrr_at_1000
value: 28.096
- type: mrr_at_3
value: 24.855
- type: mrr_at_5
value: 26.029999999999998
- type: ndcg_at_1
value: 19.03
- type: ndcg_at_10
value: 27.526
- type: ndcg_at_100
value: 33.040000000000006
- type: ndcg_at_1000
value: 36.187000000000005
- type: ndcg_at_3
value: 23.497
- type: ndcg_at_5
value: 25.334
- type: precision_at_1
value: 19.03
- type: precision_at_10
value: 4.963
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.360000000000001
- type: precision_at_5
value: 8.134
- type: recall_at_1
value: 15.378
- type: recall_at_10
value: 38.061
- type: recall_at_100
value: 61.754
- type: recall_at_1000
value: 84.259
- type: recall_at_3
value: 26.788
- type: recall_at_5
value: 31.326999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.511999999999997
- type: map_at_10
value: 37.429
- type: map_at_100
value: 38.818000000000005
- type: map_at_1000
value: 38.924
- type: map_at_3
value: 34.625
- type: map_at_5
value: 36.064
- type: mrr_at_1
value: 33.300999999999995
- type: mrr_at_10
value: 43.036
- type: mrr_at_100
value: 43.894
- type: mrr_at_1000
value: 43.936
- type: mrr_at_3
value: 40.825
- type: mrr_at_5
value: 42.028
- type: ndcg_at_1
value: 33.300999999999995
- type: ndcg_at_10
value: 43.229
- type: ndcg_at_100
value: 48.992000000000004
- type: ndcg_at_1000
value: 51.02100000000001
- type: ndcg_at_3
value: 38.794000000000004
- type: ndcg_at_5
value: 40.65
- type: precision_at_1
value: 33.300999999999995
- type: precision_at_10
value: 7.777000000000001
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.351
- type: precision_at_5
value: 12.762
- type: recall_at_1
value: 27.511999999999997
- type: recall_at_10
value: 54.788000000000004
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 92.49199999999999
- type: recall_at_3
value: 41.924
- type: recall_at_5
value: 47.026
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.117
- type: map_at_10
value: 33.32
- type: map_at_100
value: 34.677
- type: map_at_1000
value: 34.78
- type: map_at_3
value: 30.233999999999998
- type: map_at_5
value: 31.668000000000003
- type: mrr_at_1
value: 29.566
- type: mrr_at_10
value: 38.244
- type: mrr_at_100
value: 39.245000000000005
- type: mrr_at_1000
value: 39.296
- type: mrr_at_3
value: 35.864000000000004
- type: mrr_at_5
value: 36.919999999999995
- type: ndcg_at_1
value: 29.566
- type: ndcg_at_10
value: 39.127
- type: ndcg_at_100
value: 44.989000000000004
- type: ndcg_at_1000
value: 47.189
- type: ndcg_at_3
value: 34.039
- type: ndcg_at_5
value: 35.744
- type: precision_at_1
value: 29.566
- type: precision_at_10
value: 7.385999999999999
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.286
- type: precision_at_5
value: 11.484
- type: recall_at_1
value: 24.117
- type: recall_at_10
value: 51.559999999999995
- type: recall_at_100
value: 77.104
- type: recall_at_1000
value: 91.79899999999999
- type: recall_at_3
value: 36.82
- type: recall_at_5
value: 41.453
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.17625
- type: map_at_10
value: 34.063916666666664
- type: map_at_100
value: 35.255500000000005
- type: map_at_1000
value: 35.37275
- type: map_at_3
value: 31.351666666666667
- type: map_at_5
value: 32.80608333333333
- type: mrr_at_1
value: 29.59783333333333
- type: mrr_at_10
value: 38.0925
- type: mrr_at_100
value: 38.957249999999995
- type: mrr_at_1000
value: 39.01608333333333
- type: mrr_at_3
value: 35.77625
- type: mrr_at_5
value: 37.04991666666667
- type: ndcg_at_1
value: 29.59783333333333
- type: ndcg_at_10
value: 39.343666666666664
- type: ndcg_at_100
value: 44.488249999999994
- type: ndcg_at_1000
value: 46.83358333333334
- type: ndcg_at_3
value: 34.69708333333333
- type: ndcg_at_5
value: 36.75075
- type: precision_at_1
value: 29.59783333333333
- type: precision_at_10
value: 6.884083333333332
- type: precision_at_100
value: 1.114
- type: precision_at_1000
value: 0.15108333333333332
- type: precision_at_3
value: 15.965250000000003
- type: precision_at_5
value: 11.246500000000001
- type: recall_at_1
value: 25.17625
- type: recall_at_10
value: 51.015999999999984
- type: recall_at_100
value: 73.60174999999998
- type: recall_at_1000
value: 89.849
- type: recall_at_3
value: 37.88399999999999
- type: recall_at_5
value: 43.24541666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.537
- type: map_at_10
value: 31.081999999999997
- type: map_at_100
value: 32.042
- type: map_at_1000
value: 32.141
- type: map_at_3
value: 29.137
- type: map_at_5
value: 30.079
- type: mrr_at_1
value: 27.454
- type: mrr_at_10
value: 33.694
- type: mrr_at_100
value: 34.579
- type: mrr_at_1000
value: 34.649
- type: mrr_at_3
value: 32.004
- type: mrr_at_5
value: 32.794000000000004
- type: ndcg_at_1
value: 27.454
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.641
- type: ndcg_at_1000
value: 42.105
- type: ndcg_at_3
value: 31.276
- type: ndcg_at_5
value: 32.65
- type: precision_at_1
value: 27.454
- type: precision_at_10
value: 5.337
- type: precision_at_100
value: 0.8250000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 24.537
- type: recall_at_10
value: 44.324999999999996
- type: recall_at_100
value: 65.949
- type: recall_at_1000
value: 84.017
- type: recall_at_3
value: 33.857
- type: recall_at_5
value: 37.316
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.122
- type: map_at_10
value: 24.32
- type: map_at_100
value: 25.338
- type: map_at_1000
value: 25.462
- type: map_at_3
value: 22.064
- type: map_at_5
value: 23.322000000000003
- type: mrr_at_1
value: 20.647
- type: mrr_at_10
value: 27.858
- type: mrr_at_100
value: 28.743999999999996
- type: mrr_at_1000
value: 28.819
- type: mrr_at_3
value: 25.769
- type: mrr_at_5
value: 26.964
- type: ndcg_at_1
value: 20.647
- type: ndcg_at_10
value: 28.849999999999998
- type: ndcg_at_100
value: 33.849000000000004
- type: ndcg_at_1000
value: 36.802
- type: ndcg_at_3
value: 24.799
- type: ndcg_at_5
value: 26.682
- type: precision_at_1
value: 20.647
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.906
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 11.769
- type: precision_at_5
value: 8.486
- type: recall_at_1
value: 17.122
- type: recall_at_10
value: 38.999
- type: recall_at_100
value: 61.467000000000006
- type: recall_at_1000
value: 82.716
- type: recall_at_3
value: 27.601
- type: recall_at_5
value: 32.471
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.396
- type: map_at_10
value: 33.415
- type: map_at_100
value: 34.521
- type: map_at_1000
value: 34.631
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 32.166
- type: mrr_at_1
value: 28.825
- type: mrr_at_10
value: 37.397000000000006
- type: mrr_at_100
value: 38.286
- type: mrr_at_1000
value: 38.346000000000004
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.32
- type: ndcg_at_1
value: 28.825
- type: ndcg_at_10
value: 38.656
- type: ndcg_at_100
value: 43.856
- type: ndcg_at_1000
value: 46.31
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.909
- type: precision_at_1
value: 28.825
- type: precision_at_10
value: 6.567
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 15.516
- type: precision_at_5
value: 10.914
- type: recall_at_1
value: 24.396
- type: recall_at_10
value: 50.747
- type: recall_at_100
value: 73.477
- type: recall_at_1000
value: 90.801
- type: recall_at_3
value: 37.1
- type: recall_at_5
value: 42.589
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.072
- type: map_at_10
value: 34.307
- type: map_at_100
value: 35.725
- type: map_at_1000
value: 35.943999999999996
- type: map_at_3
value: 30.906
- type: map_at_5
value: 32.818000000000005
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.673
- type: mrr_at_100
value: 39.459
- type: mrr_at_1000
value: 39.527
- type: mrr_at_3
value: 35.771
- type: mrr_at_5
value: 37.332
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 40.548
- type: ndcg_at_100
value: 45.678999999999995
- type: ndcg_at_1000
value: 48.488
- type: ndcg_at_3
value: 34.887
- type: ndcg_at_5
value: 37.543
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.688000000000001
- type: precision_at_100
value: 1.482
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.016
- type: recall_at_1
value: 25.072
- type: recall_at_10
value: 53.478
- type: recall_at_100
value: 76.07300000000001
- type: recall_at_1000
value: 93.884
- type: recall_at_3
value: 37.583
- type: recall_at_5
value: 44.464
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.712
- type: map_at_10
value: 27.467999999999996
- type: map_at_100
value: 28.502
- type: map_at_1000
value: 28.610000000000003
- type: map_at_3
value: 24.887999999999998
- type: map_at_5
value: 26.273999999999997
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 29.553
- type: mrr_at_100
value: 30.485
- type: mrr_at_1000
value: 30.56
- type: mrr_at_3
value: 27.078999999999997
- type: mrr_at_5
value: 28.401
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 32.023
- type: ndcg_at_100
value: 37.158
- type: ndcg_at_1000
value: 39.823
- type: ndcg_at_3
value: 26.951999999999998
- type: ndcg_at_5
value: 29.281000000000002
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.213
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.244
- type: recall_at_1
value: 20.712
- type: recall_at_10
value: 44.057
- type: recall_at_100
value: 67.944
- type: recall_at_1000
value: 87.925
- type: recall_at_3
value: 30.305
- type: recall_at_5
value: 36.071999999999996
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.181999999999999
- type: map_at_10
value: 16.66
- type: map_at_100
value: 18.273
- type: map_at_1000
value: 18.45
- type: map_at_3
value: 14.141
- type: map_at_5
value: 15.455
- type: mrr_at_1
value: 22.15
- type: mrr_at_10
value: 32.062000000000005
- type: mrr_at_100
value: 33.116
- type: mrr_at_1000
value: 33.168
- type: mrr_at_3
value: 28.827
- type: mrr_at_5
value: 30.892999999999997
- type: ndcg_at_1
value: 22.15
- type: ndcg_at_10
value: 23.532
- type: ndcg_at_100
value: 30.358
- type: ndcg_at_1000
value: 33.783
- type: ndcg_at_3
value: 19.222
- type: ndcg_at_5
value: 20.919999999999998
- type: precision_at_1
value: 22.15
- type: precision_at_10
value: 7.185999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 13.941
- type: precision_at_5
value: 10.906
- type: recall_at_1
value: 10.181999999999999
- type: recall_at_10
value: 28.104000000000003
- type: recall_at_100
value: 51.998999999999995
- type: recall_at_1000
value: 71.311
- type: recall_at_3
value: 17.698
- type: recall_at_5
value: 22.262999999999998
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.669
- type: map_at_10
value: 15.552
- type: map_at_100
value: 21.865000000000002
- type: map_at_1000
value: 23.268
- type: map_at_3
value: 11.309
- type: map_at_5
value: 13.084000000000001
- type: mrr_at_1
value: 55.50000000000001
- type: mrr_at_10
value: 66.46600000000001
- type: mrr_at_100
value: 66.944
- type: mrr_at_1000
value: 66.956
- type: mrr_at_3
value: 64.542
- type: mrr_at_5
value: 65.717
- type: ndcg_at_1
value: 44.75
- type: ndcg_at_10
value: 35.049
- type: ndcg_at_100
value: 39.073
- type: ndcg_at_1000
value: 46.208
- type: ndcg_at_3
value: 39.525
- type: ndcg_at_5
value: 37.156
- type: precision_at_1
value: 55.50000000000001
- type: precision_at_10
value: 27.800000000000004
- type: precision_at_100
value: 9.013
- type: precision_at_1000
value: 1.8800000000000001
- type: precision_at_3
value: 42.667
- type: precision_at_5
value: 36.0
- type: recall_at_1
value: 6.669
- type: recall_at_10
value: 21.811
- type: recall_at_100
value: 45.112
- type: recall_at_1000
value: 67.806
- type: recall_at_3
value: 13.373
- type: recall_at_5
value: 16.615
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.769999999999996
- type: f1
value: 42.91448356376592
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.013
- type: map_at_10
value: 66.239
- type: map_at_100
value: 66.62599999999999
- type: map_at_1000
value: 66.644
- type: map_at_3
value: 63.965
- type: map_at_5
value: 65.45400000000001
- type: mrr_at_1
value: 58.221000000000004
- type: mrr_at_10
value: 70.43700000000001
- type: mrr_at_100
value: 70.744
- type: mrr_at_1000
value: 70.75099999999999
- type: mrr_at_3
value: 68.284
- type: mrr_at_5
value: 69.721
- type: ndcg_at_1
value: 58.221000000000004
- type: ndcg_at_10
value: 72.327
- type: ndcg_at_100
value: 73.953
- type: ndcg_at_1000
value: 74.312
- type: ndcg_at_3
value: 68.062
- type: ndcg_at_5
value: 70.56400000000001
- type: precision_at_1
value: 58.221000000000004
- type: precision_at_10
value: 9.521
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.348
- type: precision_at_5
value: 17.794999999999998
- type: recall_at_1
value: 54.013
- type: recall_at_10
value: 86.957
- type: recall_at_100
value: 93.911
- type: recall_at_1000
value: 96.38
- type: recall_at_3
value: 75.555
- type: recall_at_5
value: 81.671
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.254
- type: map_at_10
value: 33.723
- type: map_at_100
value: 35.574
- type: map_at_1000
value: 35.730000000000004
- type: map_at_3
value: 29.473
- type: map_at_5
value: 31.543
- type: mrr_at_1
value: 41.358
- type: mrr_at_10
value: 49.498
- type: mrr_at_100
value: 50.275999999999996
- type: mrr_at_1000
value: 50.308
- type: mrr_at_3
value: 47.016000000000005
- type: mrr_at_5
value: 48.336
- type: ndcg_at_1
value: 41.358
- type: ndcg_at_10
value: 41.579
- type: ndcg_at_100
value: 48.455
- type: ndcg_at_1000
value: 51.165000000000006
- type: ndcg_at_3
value: 37.681
- type: ndcg_at_5
value: 38.49
- type: precision_at_1
value: 41.358
- type: precision_at_10
value: 11.543000000000001
- type: precision_at_100
value: 1.87
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.743000000000002
- type: precision_at_5
value: 17.994
- type: recall_at_1
value: 21.254
- type: recall_at_10
value: 48.698
- type: recall_at_100
value: 74.588
- type: recall_at_1000
value: 91.00200000000001
- type: recall_at_3
value: 33.939
- type: recall_at_5
value: 39.367000000000004
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.922
- type: map_at_10
value: 52.32599999999999
- type: map_at_100
value: 53.18000000000001
- type: map_at_1000
value: 53.245
- type: map_at_3
value: 49.294
- type: map_at_5
value: 51.202999999999996
- type: mrr_at_1
value: 71.843
- type: mrr_at_10
value: 78.24600000000001
- type: mrr_at_100
value: 78.515
- type: mrr_at_1000
value: 78.527
- type: mrr_at_3
value: 77.17500000000001
- type: mrr_at_5
value: 77.852
- type: ndcg_at_1
value: 71.843
- type: ndcg_at_10
value: 61.379
- type: ndcg_at_100
value: 64.535
- type: ndcg_at_1000
value: 65.888
- type: ndcg_at_3
value: 56.958
- type: ndcg_at_5
value: 59.434
- type: precision_at_1
value: 71.843
- type: precision_at_10
value: 12.686
- type: precision_at_100
value: 1.517
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 35.778
- type: precision_at_5
value: 23.422
- type: recall_at_1
value: 35.922
- type: recall_at_10
value: 63.43
- type: recall_at_100
value: 75.868
- type: recall_at_1000
value: 84.88900000000001
- type: recall_at_3
value: 53.666000000000004
- type: recall_at_5
value: 58.555
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 79.4408
- type: ap
value: 73.52820871620366
- type: f1
value: 79.36240238685001
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.826999999999998
- type: map_at_10
value: 34.04
- type: map_at_100
value: 35.226
- type: map_at_1000
value: 35.275
- type: map_at_3
value: 30.165999999999997
- type: map_at_5
value: 32.318000000000005
- type: mrr_at_1
value: 22.464000000000002
- type: mrr_at_10
value: 34.631
- type: mrr_at_100
value: 35.752
- type: mrr_at_1000
value: 35.795
- type: mrr_at_3
value: 30.798
- type: mrr_at_5
value: 32.946999999999996
- type: ndcg_at_1
value: 22.464000000000002
- type: ndcg_at_10
value: 40.919
- type: ndcg_at_100
value: 46.632
- type: ndcg_at_1000
value: 47.833
- type: ndcg_at_3
value: 32.992
- type: ndcg_at_5
value: 36.834
- type: precision_at_1
value: 22.464000000000002
- type: precision_at_10
value: 6.494
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.021
- type: precision_at_5
value: 10.347000000000001
- type: recall_at_1
value: 21.826999999999998
- type: recall_at_10
value: 62.132
- type: recall_at_100
value: 88.55199999999999
- type: recall_at_1000
value: 97.707
- type: recall_at_3
value: 40.541
- type: recall_at_5
value: 49.739
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.68399452804377
- type: f1
value: 95.25490609832268
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 83.15321477428182
- type: f1
value: 60.35476439087966
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.92669804976462
- type: f1
value: 69.22815107207565
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4855413584398
- type: f1
value: 72.92107516103387
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.412679360205544
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.09211869875204
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.540919056982545
- type: mrr
value: 31.529904607063536
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.745
- type: map_at_10
value: 12.013
- type: map_at_100
value: 15.040000000000001
- type: map_at_1000
value: 16.427
- type: map_at_3
value: 8.841000000000001
- type: map_at_5
value: 10.289
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 53.483999999999995
- type: mrr_at_100
value: 54.20700000000001
- type: mrr_at_1000
value: 54.252
- type: mrr_at_3
value: 51.29
- type: mrr_at_5
value: 52.73
- type: ndcg_at_1
value: 43.808
- type: ndcg_at_10
value: 32.445
- type: ndcg_at_100
value: 30.031000000000002
- type: ndcg_at_1000
value: 39.007
- type: ndcg_at_3
value: 37.204
- type: ndcg_at_5
value: 35.07
- type: precision_at_1
value: 45.201
- type: precision_at_10
value: 23.684
- type: precision_at_100
value: 7.600999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 33.953
- type: precision_at_5
value: 29.412
- type: recall_at_1
value: 5.745
- type: recall_at_10
value: 16.168
- type: recall_at_100
value: 30.875999999999998
- type: recall_at_1000
value: 62.686
- type: recall_at_3
value: 9.75
- type: recall_at_5
value: 12.413
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.828
- type: map_at_10
value: 53.239000000000004
- type: map_at_100
value: 54.035999999999994
- type: map_at_1000
value: 54.067
- type: map_at_3
value: 49.289
- type: map_at_5
value: 51.784
- type: mrr_at_1
value: 42.497
- type: mrr_at_10
value: 55.916999999999994
- type: mrr_at_100
value: 56.495
- type: mrr_at_1000
value: 56.516999999999996
- type: mrr_at_3
value: 52.800000000000004
- type: mrr_at_5
value: 54.722
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 60.437
- type: ndcg_at_100
value: 63.731
- type: ndcg_at_1000
value: 64.41799999999999
- type: ndcg_at_3
value: 53.230999999999995
- type: ndcg_at_5
value: 57.26
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.47
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.724999999999998
- type: precision_at_5
value: 16.593
- type: recall_at_1
value: 37.828
- type: recall_at_10
value: 79.538
- type: recall_at_100
value: 93.646
- type: recall_at_1000
value: 98.72999999999999
- type: recall_at_3
value: 61.134
- type: recall_at_5
value: 70.377
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.548
- type: map_at_10
value: 84.466
- type: map_at_100
value: 85.10600000000001
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 81.57600000000001
- type: map_at_5
value: 83.399
- type: mrr_at_1
value: 81.24
- type: mrr_at_10
value: 87.457
- type: mrr_at_100
value: 87.574
- type: mrr_at_1000
value: 87.575
- type: mrr_at_3
value: 86.507
- type: mrr_at_5
value: 87.205
- type: ndcg_at_1
value: 81.25
- type: ndcg_at_10
value: 88.203
- type: ndcg_at_100
value: 89.457
- type: ndcg_at_1000
value: 89.563
- type: ndcg_at_3
value: 85.465
- type: ndcg_at_5
value: 87.007
- type: precision_at_1
value: 81.25
- type: precision_at_10
value: 13.373
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.417
- type: precision_at_5
value: 24.556
- type: recall_at_1
value: 70.548
- type: recall_at_10
value: 95.208
- type: recall_at_100
value: 99.514
- type: recall_at_1000
value: 99.988
- type: recall_at_3
value: 87.214
- type: recall_at_5
value: 91.696
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 53.04822095496839
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.30778476474675
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 11.766
- type: map_at_100
value: 13.904
- type: map_at_1000
value: 14.216999999999999
- type: map_at_3
value: 8.245
- type: map_at_5
value: 9.92
- type: mrr_at_1
value: 23.0
- type: mrr_at_10
value: 33.78
- type: mrr_at_100
value: 34.922
- type: mrr_at_1000
value: 34.973
- type: mrr_at_3
value: 30.2
- type: mrr_at_5
value: 32.565
- type: ndcg_at_1
value: 23.0
- type: ndcg_at_10
value: 19.863
- type: ndcg_at_100
value: 28.141
- type: ndcg_at_1000
value: 33.549
- type: ndcg_at_3
value: 18.434
- type: ndcg_at_5
value: 16.384
- type: precision_at_1
value: 23.0
- type: precision_at_10
value: 10.39
- type: precision_at_100
value: 2.235
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 21.025
- type: recall_at_100
value: 45.324999999999996
- type: recall_at_1000
value: 71.675
- type: recall_at_3
value: 10.440000000000001
- type: recall_at_5
value: 14.64
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.96178184892842
- type: cos_sim_spearman
value: 79.6487740813199
- type: euclidean_pearson
value: 82.06661161625023
- type: euclidean_spearman
value: 79.64876769031183
- type: manhattan_pearson
value: 82.07061164575131
- type: manhattan_spearman
value: 79.65197039464537
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.15305604100027
- type: cos_sim_spearman
value: 74.27447427941591
- type: euclidean_pearson
value: 80.52737337565307
- type: euclidean_spearman
value: 74.27416077132192
- type: manhattan_pearson
value: 80.53728571140387
- type: manhattan_spearman
value: 74.28853605753457
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.44386080639279
- type: cos_sim_spearman
value: 84.17947648159536
- type: euclidean_pearson
value: 83.34145388129387
- type: euclidean_spearman
value: 84.17947648159536
- type: manhattan_pearson
value: 83.30699061927966
- type: manhattan_spearman
value: 84.18125737380451
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.57392220985612
- type: cos_sim_spearman
value: 78.80745014464101
- type: euclidean_pearson
value: 80.01660371487199
- type: euclidean_spearman
value: 78.80741240102256
- type: manhattan_pearson
value: 79.96810779507953
- type: manhattan_spearman
value: 78.75600400119448
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.85421063026625
- type: cos_sim_spearman
value: 87.55320285299192
- type: euclidean_pearson
value: 86.69750143323517
- type: euclidean_spearman
value: 87.55320284326378
- type: manhattan_pearson
value: 86.63379169960379
- type: manhattan_spearman
value: 87.4815029877984
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.31314130411842
- type: cos_sim_spearman
value: 85.3489588181433
- type: euclidean_pearson
value: 84.13240933463535
- type: euclidean_spearman
value: 85.34902871403281
- type: manhattan_pearson
value: 84.01183086503559
- type: manhattan_spearman
value: 85.19316703166102
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.09979781689536
- type: cos_sim_spearman
value: 88.87813323759015
- type: euclidean_pearson
value: 88.65413031123792
- type: euclidean_spearman
value: 88.87813323759015
- type: manhattan_pearson
value: 88.61818758256024
- type: manhattan_spearman
value: 88.81044100494604
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30693258111531
- type: cos_sim_spearman
value: 62.195516523251946
- type: euclidean_pearson
value: 62.951283701049476
- type: euclidean_spearman
value: 62.195516523251946
- type: manhattan_pearson
value: 63.068322281439535
- type: manhattan_spearman
value: 62.10621171028406
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.27092833763909
- type: cos_sim_spearman
value: 84.84429717949759
- type: euclidean_pearson
value: 84.8516966060792
- type: euclidean_spearman
value: 84.84429717949759
- type: manhattan_pearson
value: 84.82203139242881
- type: manhattan_spearman
value: 84.8358503952945
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.10290863981409
- type: mrr
value: 95.31168450286097
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.161
- type: map_at_10
value: 62.138000000000005
- type: map_at_100
value: 62.769
- type: map_at_1000
value: 62.812
- type: map_at_3
value: 59.111000000000004
- type: map_at_5
value: 60.995999999999995
- type: mrr_at_1
value: 55.333
- type: mrr_at_10
value: 63.504000000000005
- type: mrr_at_100
value: 64.036
- type: mrr_at_1000
value: 64.08
- type: mrr_at_3
value: 61.278
- type: mrr_at_5
value: 62.778
- type: ndcg_at_1
value: 55.333
- type: ndcg_at_10
value: 66.678
- type: ndcg_at_100
value: 69.415
- type: ndcg_at_1000
value: 70.453
- type: ndcg_at_3
value: 61.755
- type: ndcg_at_5
value: 64.546
- type: precision_at_1
value: 55.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 52.161
- type: recall_at_10
value: 79.156
- type: recall_at_100
value: 91.333
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 66.43299999999999
- type: recall_at_5
value: 73.272
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81287128712871
- type: cos_sim_ap
value: 95.30034785910676
- type: cos_sim_f1
value: 90.28629856850716
- type: cos_sim_precision
value: 92.36401673640168
- type: cos_sim_recall
value: 88.3
- type: dot_accuracy
value: 99.81287128712871
- type: dot_ap
value: 95.30034785910676
- type: dot_f1
value: 90.28629856850716
- type: dot_precision
value: 92.36401673640168
- type: dot_recall
value: 88.3
- type: euclidean_accuracy
value: 99.81287128712871
- type: euclidean_ap
value: 95.30034785910676
- type: euclidean_f1
value: 90.28629856850716
- type: euclidean_precision
value: 92.36401673640168
- type: euclidean_recall
value: 88.3
- type: manhattan_accuracy
value: 99.80990099009901
- type: manhattan_ap
value: 95.26880751950654
- type: manhattan_f1
value: 90.22177419354838
- type: manhattan_precision
value: 90.95528455284553
- type: manhattan_recall
value: 89.5
- type: max_accuracy
value: 99.81287128712871
- type: max_ap
value: 95.30034785910676
- type: max_f1
value: 90.28629856850716
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.518662504351184
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96168178378587
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.04862593471896
- type: mrr
value: 52.97238402936932
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.092545236479946
- type: cos_sim_spearman
value: 31.599851000175498
- type: dot_pearson
value: 30.092542723901676
- type: dot_spearman
value: 31.599851000175498
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.189
- type: map_at_10
value: 1.662
- type: map_at_100
value: 9.384
- type: map_at_1000
value: 22.669
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.9039999999999999
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 81.01899999999999
- type: mrr_at_100
value: 81.01899999999999
- type: mrr_at_1000
value: 81.01899999999999
- type: mrr_at_3
value: 79.333
- type: mrr_at_5
value: 80.733
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 65.913
- type: ndcg_at_100
value: 51.895
- type: ndcg_at_1000
value: 46.967
- type: ndcg_at_3
value: 65.49199999999999
- type: ndcg_at_5
value: 66.69699999999999
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 71.6
- type: precision_at_100
value: 53.66
- type: precision_at_1000
value: 21.124000000000002
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.189
- type: recall_at_10
value: 1.913
- type: recall_at_100
value: 12.601999999999999
- type: recall_at_1000
value: 44.296
- type: recall_at_3
value: 0.605
- type: recall_at_5
value: 1.018
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.701
- type: map_at_10
value: 10.445
- type: map_at_100
value: 17.324
- type: map_at_1000
value: 19.161
- type: map_at_3
value: 5.497
- type: map_at_5
value: 7.278
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 45.534
- type: mrr_at_100
value: 45.792
- type: mrr_at_1000
value: 45.806999999999995
- type: mrr_at_3
value: 37.755
- type: mrr_at_5
value: 43.469
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 26.235000000000003
- type: ndcg_at_100
value: 39.17
- type: ndcg_at_1000
value: 51.038
- type: ndcg_at_3
value: 23.625
- type: ndcg_at_5
value: 24.338
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 24.285999999999998
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.6179999999999999
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 2.701
- type: recall_at_10
value: 17.997
- type: recall_at_100
value: 51.766999999999996
- type: recall_at_1000
value: 87.863
- type: recall_at_3
value: 6.295000000000001
- type: recall_at_5
value: 9.993
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 73.3474
- type: ap
value: 15.393431414459924
- type: f1
value: 56.466681887882416
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.062818336163
- type: f1
value: 62.11230840463252
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.464892820845115
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.15962329379508
- type: cos_sim_ap
value: 74.73674057919256
- type: cos_sim_f1
value: 68.81245642574947
- type: cos_sim_precision
value: 61.48255813953488
- type: cos_sim_recall
value: 78.12664907651715
- type: dot_accuracy
value: 86.15962329379508
- type: dot_ap
value: 74.7367634988281
- type: dot_f1
value: 68.81245642574947
- type: dot_precision
value: 61.48255813953488
- type: dot_recall
value: 78.12664907651715
- type: euclidean_accuracy
value: 86.15962329379508
- type: euclidean_ap
value: 74.7367761466634
- type: euclidean_f1
value: 68.81245642574947
- type: euclidean_precision
value: 61.48255813953488
- type: euclidean_recall
value: 78.12664907651715
- type: manhattan_accuracy
value: 86.21326816474935
- type: manhattan_ap
value: 74.64416473733951
- type: manhattan_f1
value: 68.80924855491331
- type: manhattan_precision
value: 61.23456790123457
- type: manhattan_recall
value: 78.52242744063325
- type: max_accuracy
value: 86.21326816474935
- type: max_ap
value: 74.7367761466634
- type: max_f1
value: 68.81245642574947
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.97620988085536
- type: cos_sim_ap
value: 86.08680845745758
- type: cos_sim_f1
value: 78.02793637114438
- type: cos_sim_precision
value: 73.11082699683736
- type: cos_sim_recall
value: 83.65414228518632
- type: dot_accuracy
value: 88.97620988085536
- type: dot_ap
value: 86.08681149437946
- type: dot_f1
value: 78.02793637114438
- type: dot_precision
value: 73.11082699683736
- type: dot_recall
value: 83.65414228518632
- type: euclidean_accuracy
value: 88.97620988085536
- type: euclidean_ap
value: 86.08681215460771
- type: euclidean_f1
value: 78.02793637114438
- type: euclidean_precision
value: 73.11082699683736
- type: euclidean_recall
value: 83.65414228518632
- type: manhattan_accuracy
value: 88.88888888888889
- type: manhattan_ap
value: 86.02916327562438
- type: manhattan_f1
value: 78.02063045516843
- type: manhattan_precision
value: 73.38851947346994
- type: manhattan_recall
value: 83.2768709578072
- type: max_accuracy
value: 88.97620988085536
- type: max_ap
value: 86.08681215460771
- type: max_f1
value: 78.02793637114438
---
# jina-embeddings-v2-base-en-GGUF
**Model creator**: [jinaai](https://huggingface.co/jinaai)<br/>
**Original model**: [jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to start using `jina-embeddings-v2-base-en` is Jina AI's [Embedding API](https://jina.ai/embeddings/).
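Since this repository ships GGUF quantizations, you can also compute embeddings locally with llama.cpp's embedding example. A minimal sketch — the `.gguf` file name is an assumption, so substitute whichever quant you actually downloaded from this repo:
```bash
# Build llama.cpp first, then point its embedding example at a local GGUF file.
# The file name below is hypothetical; use the quant you downloaded.
./llama-embedding -m jina-embeddings-v2-base-en-Q8_0.gguf -p "How is the weather today?"
```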
## Intended Usage & Model Info
`jina-embeddings-v2-base-en` is an English, monolingual **embedding model** supporting **8192 sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths.
The backbone `jina-bert-v2-base-en` is pretrained on the C4 dataset.
The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The embedding model was trained with a sequence length of 512, but extrapolates to sequence lengths of 8k (or even longer) thanks to ALiBi.
This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.
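For intuition, ALiBi drops learned positional embeddings and instead adds a bias proportional to token distance directly to the attention logits, which is what lets the model extrapolate beyond its training length. The sketch below shows the symmetric (bidirectional) variant in plain PyTorch; it is an illustration of the idea, not the exact JinaBERT implementation, and the slope schedule follows the ALiBi paper:
```python
import torch

def symmetric_alibi_bias(seq_len: int, num_heads: int) -> torch.Tensor:
    """Distance-proportional attention bias, shape (num_heads, seq_len, seq_len)."""
    # Per-head slopes form a geometric sequence, as proposed in the ALiBi paper.
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    positions = torch.arange(seq_len)
    # Symmetric |i - j| distances make the bias bidirectional (encoder-style).
    distances = (positions[None, :] - positions[:, None]).abs()
    return -slopes[:, None, None] * distances[None, :, :]

# The bias is simply added to the raw attention scores before softmax:
# scores = q @ k.transpose(-2, -1) / d_k**0.5 + symmetric_alibi_bias(L, H)
```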
With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.
Additionally, we provide the following embedding models:
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters **(you are here)**.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): Chinese-English Bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): German-English Bilingual embeddings.
- [`jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es): Spanish-English Bilingual embeddings.
## Data & Parameters
Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923)
## Usage
**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>
### Why mean pooling?
`mean pooling` takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be the most effective way to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # (batch, seq_len, hidden): per-token embeddings
    # Expand the attention mask so padding tokens are excluded from the average
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['How is the weather today?', 'What is the current weather like today?']
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-small-en')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)  # L2-normalize so cosine similarity reduces to a dot product
```
</p>
</details>
You can use Jina Embedding models directly from the `transformers` package.
```python
!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm
cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only want to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
['Very long ... document'],
max_length=2048
)
```
As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged in to Hugging Face as well):
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"jinaai/jina-embeddings-v2-base-en", # switch to en/zh for English or Chinese
trust_remote_code=True
)
# control your input sequence length up to 8192
model.max_seq_length = 1024
embeddings = model.encode([
'How is the weather today?',
'What is the current weather like today?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using the Transformers (or SentenceTransformers) Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploying them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
## Use Jina Embeddings for RAG
According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
## Plans
1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.
2. Multimodal embedding models enabling multimodal RAG applications.
3. High-performance rerankers.
## Troubleshooting
**Loading of Model Code failed**
If you forgot to pass the `trust_remote_code=True` flag when calling `AutoModel.from_pretrained` or initializing the model via the `SentenceTransformer` class, you will receive an error that the model weights could not be initialized.
This is caused by `transformers` falling back to creating a default BERT model instead of a jina-embeddings model:
```bash
Some weights of the model checkpoint at jinaai/jina-embeddings-v2-base-en were not used when initializing BertModel: ['encoder.layer.2.mlp.layernorm.weight', 'encoder.layer.3.mlp.layernorm.weight', 'encoder.layer.10.mlp.wo.bias', 'encoder.layer.5.mlp.wo.bias', 'encoder.layer.2.mlp.layernorm.bias', 'encoder.layer.1.mlp.gated_layers.weight', 'encoder.layer.5.mlp.gated_layers.weight', 'encoder.layer.8.mlp.layernorm.bias', ...
```
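The fix is to pass the flag explicitly, exactly as in the usage examples above:
```python
# trust_remote_code=True lets transformers load the custom JinaBERT model code
from transformers import AutoModel
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
```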
**User is not logged into Huggingface**
The model is only available under [gated access](https://huggingface.co/docs/hub/models-gated).
This means you need to be logged in to Hugging Face to load it.
If you receive the following error, you need to provide an access token, either by logging in with the `huggingface-cli` or by providing the token via an environment variable (an illustrative example follows the error message below):
```bash
OSError: jinaai/jina-embeddings-v2-base-en is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
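For example (illustrative only; any standard Hugging Face authentication method works, and `HF_TOKEN` is assumed to be the environment variable your `huggingface_hub` version reads):
```bash
# Log in interactively (stores your access token locally) ...
huggingface-cli login
# ... or export an access token as an environment variable instead
export HF_TOKEN=<your-access-token>
```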
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@misc{günther2023jina,
title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao},
year={2023},
eprint={2310.19923},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF | featherless-ai-quants | 2024-11-01T01:59:43Z | 33 | 0 | null | [
"gguf",
"text-generation",
"base_model:SherlockAssistant/Mistral-7B-Instruct-Ukrainian",
"base_model:quantized:SherlockAssistant/Mistral-7B-Instruct-Ukrainian",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T01:45:02Z | ---
base_model: SherlockAssistant/Mistral-7B-Instruct-Ukrainian
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# SherlockAssistant/Mistral-7B-Instruct-Ukrainian GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [SherlockAssistant-Mistral-7B-Instruct-Ukrainian-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-GGUF/blob/main/SherlockAssistant-Mistral-7B-Instruct-Ukrainian-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/Faya-Expanse-8B-GGUF | mradermacher | 2024-11-01T01:57:09Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"fr",
"dataset:Svngoku/french-multilingual-reward-bench-dpo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T22:51:09Z | ---
base_model: Svngoku/Faya-Expanse-8B
datasets:
- Svngoku/french-multilingual-reward-bench-dpo
language:
- fr
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Svngoku/Faya-Expanse-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
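As a concrete sketch, one way to fetch and run a single quant with llama.cpp — the file name matches the Q4_K_M entry in the table below, and the binary name assumes a recent llama.cpp build:
```bash
# Download one quant from this repo ...
huggingface-cli download mradermacher/Faya-Expanse-8B-GGUF Faya-Expanse-8B.Q4_K_M.gguf --local-dir .
# ... if the quant was split into parts, concatenate them first:
# cat Faya-Expanse-8B.Q4_K_M.gguf.part* > Faya-Expanse-8B.Q4_K_M.gguf
# ... then run it locally:
./llama-cli -m Faya-Expanse-8B.Q4_K_M.gguf -p "Bonjour, comment ça va ?" -n 128
```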
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF/resolve/main/Faya-Expanse-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Faya-Expanse-8B-i1-GGUF | mradermacher | 2024-11-01T01:57:09Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T00:44:19Z | ---
base_model: Svngoku/Faya-Expanse-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Svngoku/Faya-Expanse-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Faya-Expanse-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Faya-Expanse-8B-i1-GGUF/resolve/main/Faya-Expanse-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
gldevelops/Llama-3.2-1B-Instruct-sensitivity | gldevelops | 2024-11-01T01:42:58Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T10:35:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF | featherless-ai-quants | 2024-11-01T01:34:33Z | 11 | 0 | null | [
"gguf",
"text-generation",
"base_model:CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo",
"base_model:quantized:CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T01:14:33Z | ---
base_model: CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-GGUF/blob/main/CorticalStack-neurotic-crown-clown-7b-tak-stack-dpo-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF | mradermacher | 2024-11-01T01:34:11Z | 60 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:renyiyu/chinese-alpaca-2-7b-dpo-v0.1",
"base_model:quantized:renyiyu/chinese-alpaca-2-7b-dpo-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T23:17:42Z | ---
base_model: renyiyu/chinese-alpaca-2-7b-dpo-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/renyiyu/chinese-alpaca-2-7b-dpo-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q8_0.gguf) | Q8_0 | 7.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.f16.gguf) | f16 | 14.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF | featherless-ai-quants | 2024-11-01T01:33:32Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/Hercules-3.0-Mistral-7B",
"base_model:quantized:Locutusque/Hercules-3.0-Mistral-7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T01:14:22Z | ---
base_model: Locutusque/Hercules-3.0-Mistral-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Hercules-3.0-Mistral-7B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [Locutusque-Hercules-3.0-Mistral-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [Locutusque-Hercules-3.0-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [Locutusque-Hercules-3.0-Mistral-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [Locutusque-Hercules-3.0-Mistral-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [Locutusque-Hercules-3.0-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-Hercules-3.0-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [Locutusque-Hercules-3.0-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [Locutusque-Hercules-3.0-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [Locutusque-Hercules-3.0-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [Locutusque-Hercules-3.0-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [Locutusque-Hercules-3.0-Mistral-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hercules-3.0-Mistral-7B-GGUF/blob/main/Locutusque-Hercules-3.0-Mistral-7B-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/Llama-3.2-3B-Apex-GGUF | mradermacher | 2024-11-01T01:30:10Z | 119 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Apex",
"base_model:quantized:bunnycore/Llama-3.2-3B-Apex",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T14:29:39Z | ---
base_model: bunnycore/Llama-3.2-3B-Apex
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Apex
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3.2-3B-Apex-i1-GGUF | mradermacher | 2024-11-01T01:30:08Z | 30 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Apex",
"base_model:quantized:bunnycore/Llama-3.2-3B-Apex",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T00:57:57Z | ---
base_model: bunnycore/Llama-3.2-3B-Apex
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Apex
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_M.gguf) | i1-IQ2_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q2_K.gguf) | i1-Q2_K | 1.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0.gguf) | i1-Q4_0 | 2.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q6_K.gguf) | i1-Q6_K | 3.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kort/i82 | Kort | 2024-11-01T01:26:54Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T01:23:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
autoprogrammer/CulturaX-zh-unsupervised-2 | autoprogrammer | 2024-11-01T01:18:46Z | 140 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T01:12:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
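Until the card is filled in, a generic causal-LM sketch along these lines should work for this checkpoint (the Chinese prompt and sampling settings are illustrative assumptions):

```python
# Hedged sketch: generic causal-LM generation for this checkpoint.
# The prompt and generation settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("autoprogrammer/CulturaX-zh-unsupervised-2")
model = AutoModelForCausalLM.from_pretrained("autoprogrammer/CulturaX-zh-unsupervised-2")

inputs = tok("你好,今天天气", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tok.decode(out[0], skip_special_tokens=True))
```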
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaziyarPanahi/Chili_Dog_8B-GGUF | MaziyarPanahi | 2024-11-01T01:16:24Z | 34 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:FourOhFour/Chili_Dog_8B",
"base_model:quantized:FourOhFour/Chili_Dog_8B",
"region:us",
"conversational"
] | text-generation | 2024-11-01T00:47:19Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Chili_Dog_8B-GGUF
base_model: FourOhFour/Chili_Dog_8B
inference: false
model_creator: FourOhFour
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Chili_Dog_8B-GGUF](https://huggingface.co/MaziyarPanahi/Chili_Dog_8B-GGUF)
- Model creator: [FourOhFour](https://huggingface.co/FourOhFour)
- Original model: [FourOhFour/Chili_Dog_8B](https://huggingface.co/FourOhFour/Chili_Dog_8B)
## Description
[MaziyarPanahi/Chili_Dog_8B-GGUF](https://huggingface.co/MaziyarPanahi/Chili_Dog_8B-GGUF) contains GGUF format model files for [FourOhFour/Chili_Dog_8B](https://huggingface.co/FourOhFour/Chili_Dog_8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
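As a quick illustration, a quant from this repo can be loaded with llama-cpp-python roughly as follows (a minimal sketch; the filename pattern and settings are assumptions, so check the Files tab for exact names):

```python
# Minimal sketch: loading a GGUF quant from this repo with llama-cpp-python.
# The filename glob below is an assumption; verify against the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Chili_Dog_8B-GGUF",
    filename="*Q4_K_M.gguf",  # downloads the first file matching this pattern
    n_ctx=4096,               # context window size
)
out = llm("Q: What is GGUF?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```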
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Kerneld/roberta-base-klue-ynat-classification | Kerneld | 2024-11-01T01:15:51Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-01T01:15:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
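In the meantime, a standard text-classification pipeline call should apply here (a hedged sketch; the Korean headline is an illustrative input, since KLUE-YNAT is Korean headline topic classification):

```python
# Hedged sketch: generic usage for a text-classification checkpoint.
# The input below is an illustrative Korean news headline.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Kerneld/roberta-base-klue-ynat-classification",
)
print(clf("삼성전자, 3분기 실적 발표"))
```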
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glif-loradex-trainer/mesonwarrior_flux_dev_close_up_animals | glif-loradex-trainer | 2024-11-01T01:07:47Z | 17 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-01T01:07:13Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730423169560__000002000_0.jpg
text: zebra
- output:
url: samples/1730423194113__000002000_1.jpg
text: shark
- output:
url: samples/1730423218657__000002000_2.jpg
text: tiger
base_model: black-forest-labs/FLUX.1-dev
trigger: close-up shot
instance_prompt: close-up shot
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux_dev_close_up_animals
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `mesonwarrior`.
<Gallery />
## Trigger words
You should use `close-up shot` to trigger the image generation.
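A minimal diffusers sketch for applying this LoRA could look like the following (it assumes a CUDA GPU with enough VRAM; the prompt and step count are illustrative):

```python
# Minimal sketch: running this LoRA on top of FLUX.1-dev with diffusers.
# Assumes `pip install diffusers torch` and a CUDA GPU with sufficient VRAM.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/mesonwarrior_flux_dev_close_up_animals")

# Include the trigger word so the LoRA's style activates.
image = pipe("close-up shot of a tiger", num_inference_steps=28).images[0]
image.save("tiger.png")
```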
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/mesonwarrior_flux_dev_close_up_animals/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF | featherless-ai-quants | 2024-11-01T01:05:52Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8",
"base_model:quantized:ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T00:35:24Z | ---
base_model: ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-IQ4_XS.gguf) | 4276.62 MB |
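Any single file from the table above can be fetched with `huggingface_hub` (a minimal sketch; substitute whichever quant you want):

```python
# Minimal sketch: downloading one quant file from this repo.
# Pick any filename from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF",
    filename="ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```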
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/bunnycore-Cognitron-8B-GGUF | featherless-ai-quants | 2024-11-01T01:04:52Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:bunnycore/Cognitron-8B",
"base_model:quantized:bunnycore/Cognitron-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T00:40:54Z | ---
base_model: bunnycore/Cognitron-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# bunnycore/Cognitron-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [bunnycore-Cognitron-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [bunnycore-Cognitron-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [bunnycore-Cognitron-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [bunnycore-Cognitron-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [bunnycore-Cognitron-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [bunnycore-Cognitron-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [bunnycore-Cognitron-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [bunnycore-Cognitron-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [bunnycore-Cognitron-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [bunnycore-Cognitron-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [bunnycore-Cognitron-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF | mradermacher | 2024-11-01T00:59:56Z | 324 | 2 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"maldv/badger-writer-llama-3-8b",
"vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B",
"Orenguteng/Llama-3-8B-Lexi-Uncensored",
"abacusai/Llama-3-Smaug-8B",
"en",
"base_model:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
"base_model:quantized:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-31T23:47:15Z | ---
base_model: ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- maldv/badger-writer-llama-3-8b
- vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- abacusai/Llama-3-Smaug-8B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
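For multi-part quants, the parts only need to be concatenated byte-for-byte in order; a minimal Python sketch (the `.part1of2`-style filenames are an assumption, so verify them against the repo listing first):

```python
# Minimal sketch: concatenating a multi-part GGUF into a single file.
# The part names below are assumptions; check the actual filenames first.
import shutil

parts = [
    "Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q6_K.gguf.part1of2",
    "Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q6_K.gguf.part2of2",
]
with open("Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append raw bytes in order
```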
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
piotrekgrl/llama381binstruct_summarize_short_merged | piotrekgrl | 2024-11-01T00:59:13Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-11-01T00:55:42Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
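Pending an official snippet, a generic chat-template sketch like the following should work for this merged checkpoint (prompt and settings are illustrative assumptions):

```python
# Hedged sketch: generic chat-template generation for this checkpoint.
# The prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("piotrekgrl/llama381binstruct_summarize_short_merged")
model = AutoModelForCausalLM.from_pretrained(
    "piotrekgrl/llama381binstruct_summarize_short_merged", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize: The quick brown fox jumps over the lazy dog."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```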
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/stabilityai_-_stablelm-2-12b-gguf | RichardErkhov | 2024-11-01T00:58:12Z | 38 | 0 | null | [
"gguf",
"arxiv:2402.17834",
"arxiv:2104.09864",
"arxiv:2204.06745",
"arxiv:1607.06450",
"arxiv:2302.05442",
"arxiv:2309.14322",
"arxiv:2305.14201",
"arxiv:2101.00027",
"arxiv:2305.06161",
"arxiv:2309.09400",
"arxiv:2206.11147",
"arxiv:1910.02054",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T20:34:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
stablelm-2-12b - GGUF
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/stablelm-2-12b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [stablelm-2-12b.Q2_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q2_K.gguf) | Q2_K | 4.38GB |
| [stablelm-2-12b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q3_K_S.gguf) | Q3_K_S | 5.05GB |
| [stablelm-2-12b.Q3_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q3_K.gguf) | Q3_K | 5.58GB |
| [stablelm-2-12b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q3_K_M.gguf) | Q3_K_M | 5.58GB |
| [stablelm-2-12b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q3_K_L.gguf) | Q3_K_L | 6.05GB |
| [stablelm-2-12b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.IQ4_XS.gguf) | IQ4_XS | 6.24GB |
| [stablelm-2-12b.Q4_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q4_0.gguf) | Q4_0 | 6.49GB |
| [stablelm-2-12b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.IQ4_NL.gguf) | IQ4_NL | 6.56GB |
| [stablelm-2-12b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q4_K_S.gguf) | Q4_K_S | 6.53GB |
| [stablelm-2-12b.Q4_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q4_K.gguf) | Q4_K | 6.86GB |
| [stablelm-2-12b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q4_K_M.gguf) | Q4_K_M | 6.86GB |
| [stablelm-2-12b.Q4_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q4_1.gguf) | Q4_1 | 7.17GB |
| [stablelm-2-12b.Q5_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q5_0.gguf) | Q5_0 | 7.84GB |
| [stablelm-2-12b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q5_K_S.gguf) | Q5_K_S | 7.84GB |
| [stablelm-2-12b.Q5_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q5_K.gguf) | Q5_K | 8.04GB |
| [stablelm-2-12b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q5_K_M.gguf) | Q5_K_M | 8.04GB |
| [stablelm-2-12b.Q5_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q5_1.gguf) | Q5_1 | 8.52GB |
| [stablelm-2-12b.Q6_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q6_K.gguf) | Q6_K | 9.28GB |
| [stablelm-2-12b.Q8_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_stablelm-2-12b-gguf/blob/main/stablelm-2-12b.Q8_0.gguf) | Q8_0 | 12.02GB |
Original model description:
---
language:
- en
- de
- es
- fr
- it
- nl
- pt
license: other
tags:
- causal-lm
datasets:
- tiiuae/falcon-refinedweb
- togethercomputer/RedPajama-Data-1T
- uonlp/CulturaX
- CarperAI/pilev2-dev
- bigcode/starcoderdata
- DataProvenanceInitiative/Commercially-Verified-Licenses
---
# `Stable LM 2 12B`
## Model Description
`Stable LM 2 12B` is a 12.1 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.
Please note: For commercial use, please refer to https://stability.ai/license.
## Usage
**NOTE**: This model requires `transformers>=4.40.0`
Get started generating text with `Stable LM 2 12B` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-12b",
    torch_dtype="auto",  # load in the checkpoint's native precision
)
model.cuda()  # move the model to the GPU

# Tokenize a prompt and move the tensors to the model's device
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)

# Sample up to 64 new tokens with temperature + nucleus sampling
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.70,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
### Run with Flash Attention 2 ⚡️
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-12b",
    torch_dtype="auto",
    attn_implementation="flash_attention_2",  # requires the flash-attn package
)
model.cuda()
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.70,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
</details>
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Stable LM 2 12B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Paper**: [Stable LM 2 Technical Report](https://arxiv.org/abs/2402.17834)
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: [Stability AI Community License](https://huggingface.co/stabilityai/stablelm-2-12b/blob/main/LICENSE.md).
* **Commercial License**: to use this model commercially, please refer to https://stability.ai/license
* **Contact**: For questions and comments about the model, please email `[email protected]`
### Model Architecture
The model is a decoder-only transformer with the following architecture:
| Parameters | Hidden Size | Layers | Heads | KV Heads | Sequence Length |
|----------------|-------------|--------|-------|----------|-----------------|
| 12,143,605,760 | 5120 | 40 | 32 | 8 | 4096 |
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Parallel Layers**: Parallel attention and feed-forward residual layers with a single input LayerNorm ([Wang, 2021](https://github.com/kingoflolz/mesh-transformer-jax)).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) without biases. Furthermore, we apply per-head QK normalization ([Dehghani et al., 2023](https://arxiv.org/abs/2302.05442), [Wortsman et al., 2023](https://arxiv.org/abs/2309.14322)).
* **Biases**: We remove all bias terms from the feed-forward networks and grouped-query self-attention layers.
* **Tokenizer**: We use Arcade100k, a BPE tokenizer extended from OpenAI's [`tiktoken.cl100k_base`](https://github.com/openai/tiktoken). We split digits into individual tokens following findings by [Liu & Low (2023)](https://arxiv.org/abs/2305.14201).
## Training
### Training Dataset
The dataset is comprised of a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer., 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)) both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)). We further supplement our training with multi-lingual data from CulturaX ([Nguyen et al., 2023](https://arxiv.org/abs/2309.09400)) and, in particular, from its OSCAR corpora, as well as restructured data in the style of [Yuan & Liu (2022)](https://arxiv.org/abs/2206.11147).
* Given the large amount of web data, we recommend fine-tuning the base `Stable LM 2 12B` for your downstream tasks.
### Training Procedure
The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the Arcade100k tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's [GitHub repository - config*](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-2-12b.yml).
### Training Infrastructure
* **Hardware**: `Stable LM 2 12B` was trained on the Stability AI cluster across 384 NVIDIA H100 GPUs (AWS P5 instances).
* **Software**: We use a fork of `gpt-neox` ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)), train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).
## Use and Limitations
### Intended Use
The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications. For commercial use, please refer to https://stability.ai/membership.
### Limitations and Bias
As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
## How to Cite
```bibtex
@article{bellagente2024stable,
title={Stable LM 2 1.6 B Technical Report},
author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others},
journal={arXiv preprint arXiv:2402.17834},
year={2024}
}
```
|
RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf | RichardErkhov | 2024-11-01T00:53:39Z | 7 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T20:32:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Starcannon-Unleashed-12B-v1.0 - GGUF
- Model creator: https://huggingface.co/VongolaChouko/
- Original model: https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Starcannon-Unleashed-12B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q2_K.gguf) | Q2_K | 4.46GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K.gguf) | Q3_K | 5.67GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [Starcannon-Unleashed-12B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_0.gguf) | Q4_0 | 6.59GB |
| [Starcannon-Unleashed-12B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_K.gguf) | Q4_K | 6.96GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_1.gguf) | Q4_1 | 7.26GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_0.gguf) | Q5_0 | 7.93GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_K.gguf) | Q5_K | 8.13GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_1.gguf) | Q5_1 | 8.61GB |
| [Starcannon-Unleashed-12B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q6_K.gguf) | Q6_K | 9.37GB |
| [Starcannon-Unleashed-12B-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
base_model:
- nothingiisreal/MN-12B-Starcannon-v3
- MarinaraSpaghetti/NemoMix-Unleashed-12B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---

Starcannon-Unleashed-12B-v1.0-GGUF
==================================
## Quantized
**GGUF:**
[VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF)
[mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF)
[bartowski/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/bartowski/Starcannon-Unleashed-12B-v1.0-GGUF)
HUGE THANKS TO [mradermacher](https://huggingface.co/mradermacher)!! ( ´•̥̥̥o•̥̥̥`)♡(˘̩̩̩̩̩̩ ⌂ ˘̩̩̩̩̩̩) Gosh dang, the fella is fast, I was shook! XD And to the GOAT, the awesome [bartowski](https://huggingface.co/bartowski), for their GGUF quantizations!
I was only able to test the model using Q6_K with 24576 context at most due to PC limitations, so please let me know how it fared for you. Hopefully it still works well with higher context!
Recommended settings are here: [**Settings**](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0#instruct)
## Sample Output

## Introduction
**WARNING: Ramblings incoming. Please continue scrolling down if you wish to skip the boring part ʱªʱªʱª(ᕑᗢूᓫ∗)**
Ohh boi, here we are! I'm very happy to share with you the result of countless hours bashing my head on the wall! *:・゚✧(=ఠ్ఠܫఠ్ఠ =)∫
To start off, I want to put up a disclaimer. This is the first time I'm attempting to merge a model, and I'm in no way an expert when it comes to coding. AT ALL. I believe I didn't understand what on earth I was looking at for like 70% of the time... Err, so there's that! I did test this model out rigorously after executing the merge, and so far I loved the results. I was honestly expecting the merge to absolutely fail and be totally incoherent, but thankfully not! The two days of not getting enough sleep were worth it ◝(˃̣̣̥▽˂̣̣̥)/
My goal was to hopefully create something that will get the best parts from each finetune/merge, where one model can cover for the other's weak points.
I am a VERY huge fan of [Starcannon v3](https://huggingface.co/nothingiisreal/MN-12B-Starcannon-v3) because of how in character its responses are. It just hits different. It's like the model is the character itself, not ACTING as the character. That's why it always feels sad whenever it starts deteriorating, like I'm observing my beloved character die. No matter what adjustments I made to the context, it wouldn't stay coherent all the way to 16K context. On the other hand, I love [NemoMix Unleashed](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B) for its awesome stability at much longer contexts and its nature to progress the story forward even without prompting. It feels nice that it can stay coherent and stable even after reaching past the context size I set. I also find its ability to read between the lines great. So I figured, why not just marry the two to get the best of both worlds?
I would honestly love to do this again if I can, because there's been one too many times I found something I like in one model and then in another and wished so desperately they would just marry each other and have kids! XD
So please let me know how it fared for my first attempt!
I also want to learn how to finetune models myself in addition to merging, but I don't think my PC is capable enough to endure it. I think it almost croaked on me when I did this merge, and my SSD cried, so maybe I'll just do it some other time when I have free time and more resources to spend.
And thus, I was finally able to merge my favorite models after hours of research, tutorials, asking annoying questions to the community (that no one replied to (´;︵;`)), and coding hell. Here we are!
**°˖✧It's all ABSOLUTELY worth it!✧˖°**
## Instruct
Both ChatML and Mistral should work fine. Personally, I tested this using ChatML. I found that I like the model's responses better when I use this format. Try to test it out and observe which one you like best. :D
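For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers; a generic illustration, written here as a Python string:

```python
# Generic illustration of the ChatML turn format (not taken from this card).
prompt = (
    "<|im_start|>system\n"
    "You are {{char}}.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```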
## Settings
I recommend using these settings:
[Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0/blob/main/Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json)
**IMPORTANT: Open Silly Tavern and use "Master Import", which can be found under "A" tab — Advanced Formatting. Replace the "INSERT WORLD HERE" placeholders with the world/universe in which your character belongs to. If not applicable, just remove that part.**

Temperature 1.15 - 1.25 is good, but lower should also work well, as long as you also tweak the Min P and XTC to ensure the model won't choke. Play around with it to see what suits your taste.
This is a modified version of MarinaraSpaghetti's Mistral-Small-Correct.json, transformed into ChatML.
You can find the original version here: [MarinaraSpaghetti/SillyTavern-Settings](https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized)
## Tips
- Examples of Dialogue and First Message are very important. The model will copy the style you wrote in these sections. So for example, if you want short outputs, make Examples of Dialogue and First Message short, and if you want longer outputs, make sure your examples have full paragraphs, composed of several sentences.
- If your Examples of Dialogue and First Message are short/concise but the model still rambles, lower Temperature in small increments, but keep Min P and XTC as-is at first. Test the result and adjust them to your liking. If it still rambles, make the XTC Threshold higher.
- Utilize Author's Note In-chat @ Depth 2 as System if you want the instruction to have greater impact on the next response. If you want something exciting and spontaneous, you can try out this note I used when I tested out the model: "Scenario: Spontaneous. {{char}} has full autonomy to do anything they wish and progress the interaction in any way they like."
## Credits
A very huge thank you to [MarinaraSpaghetti](https://huggingface.co/MarinaraSpaghetti) and [Nothing is Real](https://huggingface.co/nothingiisreal)!! (灬^ω^灬)ノ~ ♡ (´。• ᵕ •。`) ♡
I really fell in love with your models and it inspired me to learn how to make this one, and boi was it worth it! °˖✧◝(TT▿TT)◜✧˖°
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the della_linear merge method, with G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B as the base.
### Models Merged
The following models were included in the merge:
* G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
dtype: bfloat16
merge_method: della_linear
parameters:
epsilon: 0.05
int8_mask: 1.0
lambda: 1.0
slices:
- sources:
- layer_range: [0, 40]
model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
parameters:
density: 0.65
weight: 0.4
- layer_range: [0, 40]
model: G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
parameters:
density: 0.55
weight: 0.6
```
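If you want to reproduce a merge like this yourself, a minimal sketch using mergekit's `mergekit-yaml` CLI might look like the following (the config filename and output path are placeholders, not the ones used here):

```python
# Hypothetical sketch of running the merge above with mergekit
# (pip install mergekit). Filenames/paths are placeholders.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",                     # CLI installed by the mergekit package
        "starcannon_unleashed.yml",          # the YAML configuration shown above
        "./Starcannon-Unleashed-12B-v1.0",   # output directory for the merged model
    ],
    check=True,
)
```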
|
glif-loradex-trainer/x_bulbul_x_windows_95_UI | glif-loradex-trainer | 2024-11-01T00:47:41Z | 34 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-01T00:46:50Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730421901473__000003000_0.jpg
text: wounded centaur, mythical creature, windows 95
- output:
url: samples/1730421925134__000003000_1.jpg
text: ruins of athens, snake, windows 95
- output:
url: samples/1730421948621__000003000_2.jpg
text: silver vampire sword, windows 95
- output:
url: samples/1730421972112__000003000_3.jpg
text: mspaint with starry night, windows 95
- output:
url: samples/1730421995723__000003000_4.jpg
text: sonic game, windows 95
base_model: black-forest-labs/FLUX.1-dev
trigger: windows 95
instance_prompt: windows 95
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# windows_95_UI
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `x_bulbul_x`.
<Gallery />
## Trigger words
You should use `windows 95` to trigger the image generation.
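A minimal usage sketch with diffusers (not part of the original card; it assumes the adapter in this repo loads directly via `load_lora_weights`):

```python
# Hedged sketch: loading this LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/x_bulbul_x_windows_95_UI")

# Prompt taken from the sample widget above; note the trigger words.
image = pipe("mspaint with starry night, windows 95").images[0]
image.save("windows95.png")
```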
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/x_bulbul_x_windows_95_UI/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16 | cloudyu | 2024-11-01T00:46:34Z | 4,292 | 15 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-03T13:23:22Z | ---
tags:
- yi
- moe
license: apache-2.0
---
This is a DPO fine-tuned version of the MoE model [TomGrc/FusionNet_34Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE_v0.1).
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
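For illustration, a rough sketch of such a DPO run with TRL might look like this; the preference dataset and hyperparameters are placeholders, and the exact argument names vary between TRL versions:

```python
# Hedged sketch of a DPO fine-tune with TRL's DPOTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "TomGrc/FusionNet_34Bx2_MoE_v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("your/preference-dataset", split="train")  # placeholder

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions call this `tokenizer`
)
trainer.train()
```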
Metrics
[Metrics](https://huggingface.co/cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO/blob/main/4bit.vs.16.jpg)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |77.91|
|AI2 Reasoning Challenge (25-Shot)|74.06|
|HellaSwag (10-Shot) |86.74|
|MMLU (5-Shot) |76.65|
|TruthfulQA (0-shot) |72.24|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |74.45|
|
mradermacher/Qwen2.5-7B-task2-i1-GGUF | mradermacher | 2024-11-01T00:42:12Z | 23 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Qwen2.5-7B-task2",
"base_model:quantized:allknowingroger/Qwen2.5-7B-task2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T00:28:11Z | ---
base_model: allknowingroger/Qwen2.5-7B-task2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allknowingroger/Qwen2.5-7B-task2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
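For the older-style split files, concatenation is plain byte-level joining; a minimal Python sketch (filenames hypothetical):

```python
# Hedged sketch: joining old-style multi-part GGUF files by concatenation.
# Note: files produced by llama.cpp's newer split mechanism should instead
# be merged with its gguf-split utility, not by raw concatenation.
import shutil

parts = [
    "model.i1-Q6_K.gguf.part1of2",  # placeholder filenames
    "model.i1-Q6_K.gguf.part2of2",
]

with open("model.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, out)  # stream bytes, avoids loading into RAM
```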
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MaziyarPanahi/IceMartiniV1RP-7b-GGUF | MaziyarPanahi | 2024-11-01T00:28:04Z | 37 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:icefog72/IceMartiniV1RP-7b",
"base_model:quantized:icefog72/IceMartiniV1RP-7b",
"region:us",
"conversational"
] | text-generation | 2024-11-01T00:05:39Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: IceMartiniV1RP-7b-GGUF
base_model: icefog72/IceMartiniV1RP-7b
inference: false
model_creator: icefog72
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/IceMartiniV1RP-7b-GGUF](https://huggingface.co/MaziyarPanahi/IceMartiniV1RP-7b-GGUF)
- Model creator: [icefog72](https://huggingface.co/icefog72)
- Original model: [icefog72/IceMartiniV1RP-7b](https://huggingface.co/icefog72/IceMartiniV1RP-7b)
## Description
[MaziyarPanahi/IceMartiniV1RP-7b-GGUF](https://huggingface.co/MaziyarPanahi/IceMartiniV1RP-7b-GGUF) contains GGUF format model files for [icefog72/IceMartiniV1RP-7b](https://huggingface.co/icefog72/IceMartiniV1RP-7b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF | featherless-ai-quants | 2024-11-01T00:26:18Z | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:nbeerbower/Lyra4-Gutenberg2-12B",
"base_model:quantized:nbeerbower/Lyra4-Gutenberg2-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-31T23:40:19Z | ---
base_model: nbeerbower/Lyra4-Gutenberg2-12B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/Lyra4-Gutenberg2-12B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [nbeerbower-Lyra4-Gutenberg2-12B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q8_0.gguf) | 12419.10 MB |
| Q4_K_S | [nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_S.gguf) | 6790.35 MB |
| Q2_K | [nbeerbower-Lyra4-Gutenberg2-12B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q2_K.gguf) | 4569.10 MB |
| Q6_K | [nbeerbower-Lyra4-Gutenberg2-12B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q6_K.gguf) | 9590.35 MB |
| Q3_K_M | [nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_S.gguf) | 5277.85 MB |
| Q3_K_L | [nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q3_K_L.gguf) | 6257.54 MB |
| Q4_K_M | [nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q4_K_M.gguf) | 7130.82 MB |
| Q5_K_S | [nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_S.gguf) | 8124.10 MB |
| Q5_K_M | [nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-Q5_K_M.gguf) | 8323.32 MB |
| IQ4_XS | [nbeerbower-Lyra4-Gutenberg2-12B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-Lyra4-Gutenberg2-12B-GGUF/blob/main/nbeerbower-Lyra4-Gutenberg2-12B-IQ4_XS.gguf) | 6485.04 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF | featherless-ai-quants | 2024-11-01T00:25:50Z | 19 | 0 | null | [
"gguf",
"text-generation",
"base_model:lashid11/CheckGPT-SOLAR-10.7B",
"base_model:quantized:lashid11/CheckGPT-SOLAR-10.7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T23:48:22Z | ---
base_model: lashid11/CheckGPT-SOLAR-10.7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# lashid11/CheckGPT-SOLAR-10.7B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [lashid11-CheckGPT-SOLAR-10.7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q8_0.gguf) | 10875.85 MB |
| Q4_K_S | [lashid11-CheckGPT-SOLAR-10.7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q4_K_S.gguf) | 5835.08 MB |
| Q2_K | [lashid11-CheckGPT-SOLAR-10.7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q2_K.gguf) | 3817.78 MB |
| Q6_K | [lashid11-CheckGPT-SOLAR-10.7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q6_K.gguf) | 8397.30 MB |
| Q3_K_M | [lashid11-CheckGPT-SOLAR-10.7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q3_K_M.gguf) | 4954.98 MB |
| Q3_K_S | [lashid11-CheckGPT-SOLAR-10.7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q3_K_S.gguf) | 4448.48 MB |
| Q3_K_L | [lashid11-CheckGPT-SOLAR-10.7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q3_K_L.gguf) | 5388.98 MB |
| Q4_K_M | [lashid11-CheckGPT-SOLAR-10.7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q4_K_M.gguf) | 6162.33 MB |
| Q5_K_S | [lashid11-CheckGPT-SOLAR-10.7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q5_K_S.gguf) | 7054.70 MB |
| Q5_K_M | [lashid11-CheckGPT-SOLAR-10.7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-Q5_K_M.gguf) | 7245.95 MB |
| IQ4_XS | [lashid11-CheckGPT-SOLAR-10.7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/lashid11-CheckGPT-SOLAR-10.7B-GGUF/blob/main/lashid11-CheckGPT-SOLAR-10.7B-IQ4_XS.gguf) | 5557.67 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF | featherless-ai-quants | 2024-11-01T00:23:00Z | 6 | 0 | null | [
"gguf",
"text-generation",
"base_model:netcat420/MFANNv0.20.12",
"base_model:quantized:netcat420/MFANNv0.20.12",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-31T23:50:43Z | ---
base_model: netcat420/MFANNv0.20.12
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# netcat420/MFANNv0.20.12 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [netcat420-MFANNv0.20.12-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [netcat420-MFANNv0.20.12-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [netcat420-MFANNv0.20.12-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [netcat420-MFANNv0.20.12-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [netcat420-MFANNv0.20.12-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [netcat420-MFANNv0.20.12-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [netcat420-MFANNv0.20.12-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [netcat420-MFANNv0.20.12-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [netcat420-MFANNv0.20.12-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [netcat420-MFANNv0.20.12-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [netcat420-MFANNv0.20.12-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
Xsmos/ml21cm | Xsmos | 2024-11-01T00:20:39Z | 0 | 0 | null | [
"tensorboard",
"generate 21cm lightcones",
"denoising diffusion probabilistic model",
"license:mit",
"region:us"
] | null | 2024-05-20T02:33:26Z | ---
title: "ml21cm"
tags:
- generate 21cm lightcones
- denoising diffusion probabilistic model
license: "mit"
summary: "This is a diffusion model for generating 21cm cosmology data."
---
# Model Description
This is a diffusion model for generating 21cm cosmology data.
|
mradermacher/Onii-Chan-3-55-GGUF | mradermacher | 2024-11-01T00:16:11Z | 30 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Onii-Chan-3/Onii-Chan-3-55",
"base_model:quantized:Onii-Chan-3/Onii-Chan-3-55",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T00:00:29Z | ---
base_model: Onii-Chan-3/Onii-Chan-3-55
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Onii-Chan-3/Onii-Chan-3-55
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
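As a quick sketch, one of these quants can be loaded with llama-cpp-python like so (the filename is taken from the table below; the parameters are illustrative):

```python
# Minimal sketch of running a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Onii-Chan-3-55.Q4_K_M.gguf",  # the "fast, recommended" quant
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```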
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Onii-Chan-3-55-GGUF/resolve/main/Onii-Chan-3-55.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF | featherless-ai-quants | 2024-11-01T00:09:46Z | 19 | 0 | null | [
"gguf",
"text-generation",
"base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4_prune",
"base_model:quantized:alnrg2arg/blockchainlabs_7B_merged_test2_4_prune",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-31T23:36:18Z | ---
base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4_prune
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# alnrg2arg/blockchainlabs_7B_merged_test2_4_prune GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-GGUF/blob/main/alnrg2arg-blockchainlabs_7B_merged_test2_4_prune-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/MISTRALllux1000-7b-v5-GGUF | mradermacher | 2024-11-01T00:04:08Z | 153 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:djomo/MISTRALllux1000-7b-v5",
"base_model:quantized:djomo/MISTRALllux1000-7b-v5",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T22:50:59Z | ---
base_model: djomo/MISTRALllux1000-7b-v5
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/djomo/MISTRALllux1000-7b-v5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MISTRALllux1000-7b-v5-GGUF/resolve/main/MISTRALllux1000-7b-v5.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Yntec/WesternCartoon | Yntec | 2024-11-01T00:00:07Z | 266 | 1 | diffusers | [
"diffusers",
"safetensors",
"Style",
"Disney",
"Art",
"PromptSharingSamaritan",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-15T08:58:05Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Style
- Disney
- Art
- PromptSharingSamaritan
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Western Cartoon Type A
No-EMA version of this model. Samples and prompts (both use seed 9119):

a woman with pink hair and a colorful scarf.

highquality, masterpiece, 1girl, Chi-Chi, :D, close up, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cave, rock, mountain. blue collar
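A hedged loading sketch with diffusers (not from the original page), using the first sample prompt above:

```python
# Minimal sketch: loading this checkpoint with StableDiffusionPipeline,
# per the pipeline tag on this repo. Step count is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/WesternCartoon", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "a woman with pink hair and a colorful scarf.",  # first sample prompt above
    num_inference_steps=30,
).images[0]
image.save("western_cartoon.png")
```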
Original page:
https://civitai.com/models/62060/western-cartoon-type-a |
DanJoshua/profesor_Swin3D_O_RLVS | DanJoshua | 2024-10-31T23:54:50Z | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T21:30:50Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: profesor_Swin3D_O_RLVS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# profesor_Swin3D_O_RLVS
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0660
- Accuracy: 0.9882
- F1: 0.9882
- Precision: 0.9882
- Recall: 0.9882
- Roc Auc: 0.9988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 477
- training_steps: 4770
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 0.0696 | 2.0329 | 477 | 0.0768 | 0.9732 | 0.9732 | 0.9734 | 0.9732 | 0.9980 |
| 0.0312 | 5.0323 | 954 | 0.1061 | 0.9786 | 0.9786 | 0.9788 | 0.9786 | 0.9984 |
| 0.0414 | 8.0317 | 1431 | 0.0981 | 0.9732 | 0.9732 | 0.9734 | 0.9732 | 0.9988 |
| 0.0005 | 11.0310 | 1908 | 0.0739 | 0.9866 | 0.9866 | 0.9866 | 0.9866 | 0.9986 |
| 0.0009 | 14.0304 | 2385 | 0.1144 | 0.9812 | 0.9812 | 0.9814 | 0.9812 | 0.9987 |
| 0.0015 | 17.0298 | 2862 | 0.2200 | 0.9705 | 0.9705 | 0.9712 | 0.9705 | 0.9964 |
| 0.0002 | 20.0291 | 3339 | 0.1794 | 0.9732 | 0.9732 | 0.9733 | 0.9732 | 0.9978 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
jrbeduardo/vit-model-jrbeduardo-v2 | jrbeduardo | 2024-10-31T23:50:52Z | 246 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-10-31T23:45:17Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-model-jrbeduardo-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model-jrbeduardo-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the AI-Lab-Makerere/beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0727
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
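Pending proper documentation, a minimal sketch of the usual image-classification usage (the image path is hypothetical):

```python
# Hedged sketch: standard ViT image-classification inference via transformers.
from transformers import pipeline

classifier = pipeline(
    "image-classification", model="jrbeduardo/vit-model-jrbeduardo-v2"
)
print(classifier("leaf.jpg"))  # hypothetical path to a bean-leaf photo
```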
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1509 | 3.8462 | 500 | 0.0727 | 0.9850 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
MaziyarPanahi/MathCoder2-Mistral-7B-GGUF | MaziyarPanahi | 2024-10-31T23:48:46Z | 36 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:MathGenie/MathCoder2-Mistral-7B",
"base_model:quantized:MathGenie/MathCoder2-Mistral-7B",
"region:us"
] | text-generation | 2024-10-31T23:28:11Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: MathCoder2-Mistral-7B-GGUF
base_model: MathGenie/MathCoder2-Mistral-7B
inference: false
model_creator: MathGenie
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MathCoder2-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/MathCoder2-Mistral-7B-GGUF)
- Model creator: [MathGenie](https://huggingface.co/MathGenie)
- Original model: [MathGenie/MathCoder2-Mistral-7B](https://huggingface.co/MathGenie/MathCoder2-Mistral-7B)
## Description
[MaziyarPanahi/MathCoder2-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/MathCoder2-Mistral-7B-GGUF) contains GGUF format model files for [MathGenie/MathCoder2-Mistral-7B](https://huggingface.co/MathGenie/MathCoder2-Mistral-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
anmittal1/camera-sd3-lora-1 | anmittal1 | 2024-10-31T23:44:18Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"sd3",
"sd3-diffusers",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:openrail++",
"region:us"
] | text-to-image | 2024-10-31T17:19:15Z | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- sd3
- sd3-diffusers
- template:sd-lora
instance_prompt: a photo of [V] object
widget:
- text: A photo of [V] object
output:
url: image_0.png
- text: A photo of [V] object
output:
url: image_1.png
- text: A photo of [V] object
output:
url: image_2.png
- text: A photo of [V] object
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - anmittal1/camera-sd3-lora-1
<Gallery />
## Model description
These are anmittal1/camera-sd3-lora-1 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of [V] object` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](anmittal1/camera-sd3-lora-1/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('anmittal1/camera-sd3-lora-1', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of [V] object').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/anmittal1/camera-sd3-lora-1/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).
## Intended uses & limitations
#### How to use
```python
# Mirroring the diffusers snippet above:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('anmittal1/camera-sd3-lora-1', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of [V] object').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
aidadev48/model16 | aidadev48 | 2024-10-31T23:39:32Z | 140 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T23:37:42Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** aidadev48
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hatemestinbejaia/mmarco-Arabic-mMiniLML-cross-encoder-NoKD-v1 | hatemestinbejaia | 2024-10-31T23:29:21Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-31T23:28:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
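Since the card is still a template, here is only a hedged sketch, assuming this is a standard mMARCO-style cross-encoder that scores query/passage relevance via sequence classification:

```python
# Hedged sketch: cross-encoder relevance scoring with transformers.
# Assumes a single-logit sequence-classification head, as is typical
# for mMARCO cross-encoders; example texts are made up.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "hatemestinbejaia/mmarco-Arabic-mMiniLML-cross-encoder-NoKD-v1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

query = "ما هي عاصمة فرنسا؟"        # "What is the capital of France?"
passage = "باريس هي عاصمة فرنسا."   # "Paris is the capital of France."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()  # higher = more relevant
print(float(score))
```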
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF | featherless-ai-quants | 2024-10-31T23:22:40Z | 15 | 0 | null | [
"gguf",
"text-generation",
"base_model:Obrolin/Kesehatan-7B-v0.1",
"base_model:quantized:Obrolin/Kesehatan-7B-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-31T22:52:01Z | ---
base_model: Obrolin/Kesehatan-7B-v0.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Obrolin/Kesehatan-7B-v0.1 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [Obrolin-Kesehatan-7B-v0.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [Obrolin-Kesehatan-7B-v0.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [Obrolin-Kesehatan-7B-v0.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [Obrolin-Kesehatan-7B-v0.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [Obrolin-Kesehatan-7B-v0.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Obrolin-Kesehatan-7B-v0.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [Obrolin-Kesehatan-7B-v0.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [Obrolin-Kesehatan-7B-v0.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [Obrolin-Kesehatan-7B-v0.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [Obrolin-Kesehatan-7B-v0.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [Obrolin-Kesehatan-7B-v0.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Obrolin-Kesehatan-7B-v0.1-GGUF/blob/main/Obrolin-Kesehatan-7B-v0.1-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF | featherless-ai-quants | 2024-10-31T23:21:30Z | 12 | 0 | null | [
"gguf",
"text-generation",
"base_model:shleeeee/mistral-ko-OpenOrca-2000",
"base_model:quantized:shleeeee/mistral-ko-OpenOrca-2000",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T22:53:37Z | ---
base_model: shleeeee/mistral-ko-OpenOrca-2000
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# shleeeee/mistral-ko-OpenOrca-2000 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [shleeeee-mistral-ko-OpenOrca-2000-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [shleeeee-mistral-ko-OpenOrca-2000-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [shleeeee-mistral-ko-OpenOrca-2000-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [shleeeee-mistral-ko-OpenOrca-2000-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [shleeeee-mistral-ko-OpenOrca-2000-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [shleeeee-mistral-ko-OpenOrca-2000-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [shleeeee-mistral-ko-OpenOrca-2000-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [shleeeee-mistral-ko-OpenOrca-2000-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [shleeeee-mistral-ko-OpenOrca-2000-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [shleeeee-mistral-ko-OpenOrca-2000-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [shleeeee-mistral-ko-OpenOrca-2000-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/shleeeee-mistral-ko-OpenOrca-2000-GGUF/blob/main/shleeeee-mistral-ko-OpenOrca-2000-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF | MaziyarPanahi | 2024-10-31T23:10:29Z | 102 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"base_model:quantized:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"region:us",
"conversational"
] | text-generation | 2024-10-31T22:49:48Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Hermes2-Gutenberg2-Mistral-7B-GGUF
base_model: nbeerbower/Hermes2-Gutenberg2-Mistral-7B
inference: false
model_creator: nbeerbower
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF)
- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
- Original model: [nbeerbower/Hermes2-Gutenberg2-Mistral-7B](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B)
## Description
[MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF) contains GGUF format model files for [nbeerbower/Hermes2-Gutenberg2-Mistral-7B](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
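As a quick illustration, here is a minimal sketch of loading this model with `llama-cpp-python`; the local file name and generation settings are assumptions, not part of this card:
```python
from llama_cpp import Llama
# Assumed local file name for one of the quants in this repo
llm = Llama(
    model_path="./Hermes2-Gutenberg2-Mistral-7B.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)
output = llm("Q: What is GGUF? A:", max_tokens=64)
print(output["choices"][0]["text"])
```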
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
nikutd01/emotion_tweet_roberta-base_2024-10-31 | nikutd01 | 2024-10-31T23:04:43Z | 196 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-31T21:00:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AndyLiang12/bert-finetuned-ner | AndyLiang12 | 2024-10-31T23:00:17Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-24T17:20:06Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9348221670802316
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9428547593225995
- name: Accuracy
type: accuracy
value: 0.9858421145581916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Precision: 0.9348
- Recall: 0.9510
- F1: 0.9429
- Accuracy: 0.9858
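Since the card does not include a usage snippet, a minimal inference sketch with the `transformers` pipeline might look like this (the example sentence is illustrative; entity labels follow the CoNLL-2003 scheme):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="AndyLiang12/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```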
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0746 | 1.0 | 1756 | 0.0711 | 0.9006 | 0.9300 | 0.9151 | 0.9802 |
| 0.0341 | 2.0 | 3512 | 0.0687 | 0.9293 | 0.9445 | 0.9368 | 0.9845 |
| 0.0219 | 3.0 | 5268 | 0.0630 | 0.9348 | 0.9510 | 0.9429 | 0.9858 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
unsloth/SmolLM2-135M-bnb-4bit | unsloth | 2024-10-31T22:56:41Z | 1,837 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"en",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:quantized:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-31T21:29:30Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/SmolLM2-135M 4bit bitsandbytes pre-quantized
For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-135M)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
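For reference, a minimal sketch of loading this pre-quantized checkpoint with plain `transformers` (assuming `bitsandbytes` is installed and a CUDA GPU is available):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-135M-bnb-4bit")
# The 4-bit quantization config ships with the checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-135M-bnb-4bit",
    device_map="auto",
)
inputs = tokenizer("Gravity is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```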
# SmolLM2
 |
unsloth/SmolLM2-135M-Instruct-bnb-4bit | unsloth | 2024-10-31T22:56:11Z | 367 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-31T21:30:48Z | ---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/SmolLM2-135M-Instruct 4bit bitsandbytes pre-quantized
For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
# SmolLM2
 |
mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF | mradermacher | 2024-10-31T22:52:09Z | 91 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:BrokenSoul/Llama-3.2-3B-Instruct-Cancer-Lung-Detection",
"base_model:quantized:BrokenSoul/Llama-3.2-3B-Instruct-Cancer-Lung-Detection",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T22:45:46Z | ---
base_model: BrokenSoul/Llama-3.2-3B-Instruct-Cancer-Lung-Detection
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BrokenSoul/Llama-3.2-3B-Instruct-Cancer-Lung-Detection
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Cancer-Lung-Detection-GGUF/resolve/main/Llama-3.2-3B-Instruct-Cancer-Lung-Detection.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
unsloth/SmolLM2-1.7B-Instruct-bnb-4bit | unsloth | 2024-10-31T22:49:35Z | 9,643 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-31T21:02:02Z | ---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/SmolLM2-1.7B-Instruct 4bit bitsandbytes pre-quantized
For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
# SmolLM2
 |
unsloth/SmolLM2-1.7B | unsloth | 2024-10-31T22:43:38Z | 8,806 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T19:12:31Z | ---
base_model: HuggingFaceTB/SmolLM2-1.7B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/SmolLM2-1.7B
For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
# SmolLM2
 |
kywch/act_mimicgen_stack_d1 | kywch | 2024-10-31T22:40:49Z | 10 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-10-31T22:40:32Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
MaziyarPanahi/Flammades-Mistral-7B-GGUF | MaziyarPanahi | 2024-10-31T22:32:55Z | 29 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:flammenai/Flammades-Mistral-7B",
"base_model:quantized:flammenai/Flammades-Mistral-7B",
"region:us",
"conversational"
] | text-generation | 2024-10-31T22:12:03Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Flammades-Mistral-7B-GGUF
base_model: flammenai/Flammades-Mistral-7B
inference: false
model_creator: flammenai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Flammades-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/Flammades-Mistral-7B-GGUF)
- Model creator: [flammenai](https://huggingface.co/flammenai)
- Original model: [flammenai/Flammades-Mistral-7B](https://huggingface.co/flammenai/Flammades-Mistral-7B)
## Description
[MaziyarPanahi/Flammades-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/Flammades-Mistral-7B-GGUF) contains GGUF format model files for [flammenai/Flammades-Mistral-7B](https://huggingface.co/flammenai/Flammades-Mistral-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
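For a chat-tuned model like this one, a minimal sketch using `llama-cpp-python`'s chat API might look as follows (the local file name is an assumption):
```python
from llama_cpp import Llama
llm = Llama(
    model_path="./Flammades-Mistral-7B.Q4_K_M.gguf",  # assumed local quant file
    n_ctx=4096,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```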
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
yjwon/mpg9_gemma9b_sft_ogd_rms_epoch3 | yjwon | 2024-10-31T22:31:18Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T22:29:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
juansebas638/keaie | juansebas638 | 2024-10-31T22:28:52Z | 27 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-31T22:28:49Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: keaie
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# keaie
<Gallery />
## Model description
## Trigger words
You should use `keaie` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/juansebas638/keaie/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
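A minimal sketch of using this LoRA with `diffusers` (the prompt beyond the trigger word, dtype, and sampler settings are assumptions; FLUX.1-dev requires a GPU with substantial VRAM):
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("juansebas638/keaie")
pipe.to("cuda")
# "keaie" is the trigger word; the rest of the prompt is illustrative
image = pipe(
    "keaie, portrait photo, soft window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("keaie.png")
```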
|
mlx-community/SmolLM2-135M-Instruct | mlx-community | 2024-10-31T22:20:24Z | 192 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T22:20:08Z | ---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- mlx
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
---
# mlx-community/SmolLM2-135M-Instruct
The Model [mlx-community/SmolLM2-135M-Instruct](https://huggingface.co/mlx-community/SmolLM2-135M-Instruct) was converted to MLX format from [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/SmolLM2-135M-Instruct")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/SmolLM2-360M-Instruct | mlx-community | 2024-10-31T22:19:30Z | 154 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T22:18:56Z | ---
library_name: transformers
license: apache-2.0
language:
- en
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
tags:
- mlx
---
# mlx-community/SmolLM2-360M-Instruct
The Model [mlx-community/SmolLM2-360M-Instruct](https://huggingface.co/mlx-community/SmolLM2-360M-Instruct) was converted to MLX format from [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/SmolLM2-360M-Instruct")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/SmolLM2-1.7B-Instruct | mlx-community | 2024-10-31T22:07:43Z | 159 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T22:05:16Z | ---
library_name: transformers
license: apache-2.0
language:
- en
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- mlx
---
# mlx-community/SmolLM2-1.7B-Instruct
The Model [mlx-community/SmolLM2-1.7B-Instruct](https://huggingface.co/mlx-community/SmolLM2-1.7B-Instruct) was converted to MLX format from [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/SmolLM2-1.7B-Instruct")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ialberquilla/model-v0 | ialberquilla | 2024-10-31T22:02:10Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2b-bnb-4bit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T22:00:32Z | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
---
# Uploaded model
- **Developed by:** ialberquilla
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
glif-loradex-trainer/x_bulbul_x_90s_anime | glif-loradex-trainer | 2024-10-31T22:00:59Z | 77 | 6 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-10-31T22:00:32Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730411923117__000003000_0.jpg
text: boy running on the street, 90s anime
- output:
url: samples/1730411947731__000003000_1.jpg
text: girl fighting a monkey, 90s anime
- output:
url: samples/1730411972353__000003000_2.jpg
text: a car driving at midnight, 90s anime
- output:
url: samples/1730411996976__000003000_3.jpg
text: samurai sword, 90s anime
- output:
url: samples/1730412021601__000003000_4.jpg
text: tall building, 90s anime
base_model: black-forest-labs/FLUX.1-dev
trigger: 90s anime
instance_prompt: 90s anime
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# 90s_anime
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `x_bulbul_x`.
<Gallery />
## Trigger words
You should use `90s anime` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/x_bulbul_x_90s_anime/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF | featherless-ai-quants | 2024-10-31T21:54:49Z | 9 | 0 | null | [
"gguf",
"text-generation",
"base_model:Weyaxi/HelpSteer-filtered-Solar-Instruct",
"base_model:quantized:Weyaxi/HelpSteer-filtered-Solar-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-31T21:34:31Z | ---
base_model: Weyaxi/HelpSteer-filtered-Solar-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Weyaxi/HelpSteer-filtered-Solar-Instruct GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q8_0.gguf) | 10875.85 MB |
| Q4_K_S | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q4_K_S.gguf) | 5835.08 MB |
| Q2_K | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q2_K.gguf) | 3817.78 MB |
| Q6_K | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q6_K.gguf) | 8397.30 MB |
| Q3_K_M | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q3_K_M.gguf) | 4954.98 MB |
| Q3_K_S | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q3_K_S.gguf) | 4448.48 MB |
| Q3_K_L | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q3_K_L.gguf) | 5388.98 MB |
| Q4_K_M | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q4_K_M.gguf) | 6162.33 MB |
| Q5_K_S | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q5_K_S.gguf) | 7054.70 MB |
| Q5_K_M | [Weyaxi-HelpSteer-filtered-Solar-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-Q5_K_M.gguf) | 7245.95 MB |
| IQ4_XS | [Weyaxi-HelpSteer-filtered-Solar-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF/blob/main/Weyaxi-HelpSteer-filtered-Solar-Instruct-IQ4_XS.gguf) | 5557.67 MB |
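To use one of these files locally, you can fetch it with `huggingface_hub` and load it with a GGUF-capable runtime such as llama-cpp-python. A minimal sketch is below; the prompt template follows the Solar Instruct convention and may need adjusting for your use case.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quantization file from this repo
model_path = hf_hub_download(
    repo_id="featherless-ai-quants/Weyaxi-HelpSteer-filtered-Solar-Instruct-GGUF",
    filename="Weyaxi-HelpSteer-filtered-Solar-Instruct-Q4_K_S.gguf",
)

# Load the GGUF file with llama-cpp-python
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("### User:\nSummarize what GGUF is.\n\n### Assistant:\n", max_tokens=128)
print(out["choices"][0]["text"])
```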
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
llmware/slim-extract-qwen-0.5b-ov | llmware | 2024-10-31T21:54:31Z | 7 | 1 | null | [
"openvino",
"qwen2",
"green",
"p1",
"llmware-fx",
"ov",
"emerald",
"license:apache-2.0",
"region:us"
] | null | 2024-10-11T09:42:32Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-extract-qwen-0.5b
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov, emerald]
---
# slim-extract-qwen-0.5b-ov
**slim-extract-qwen-0.5b-ov** is a specialized function calling model with a single mission: look for values in a text based on an "extract" key that is passed as a parameter. No instructions are required other than the context passage and the target key; the model generates a python dictionary consisting of the extract key and a list of the values found in the text, returning an empty list if the text does not provide a value for the selected key.
This is an OpenVino int4 quantized version of slim-extract-qwen-0.5b, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
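A minimal usage sketch with the llmware library is shown below. Loading through llmware's `ModelCatalog` and the exact response shape are assumptions based on llmware's SLIM tool conventions; the extract key "total revenue" and the sample text are illustrative.
```python
from llmware.models import ModelCatalog

# Load the OpenVino-packaged model through the llmware catalog
model = ModelCatalog().load_model("llmware/slim-extract-qwen-0.5b-ov")

text = ("In the fourth quarter, the company reported total revenue "
        "of $12.5 billion, up 8% year-over-year.")

# Pass the context passage and the target extract key
response = model.function_call(text, params=["total revenue"])
print(response)  # e.g., {'llm_response': {'total revenue': ['$12.5 billion']}}
```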
### Model Description
- **Developed by:** llmware
- **Model type:** qwen2-0.5b
- **Parameters:** 0.5 billion
- **Model Parent:** llmware/slim-extract-qwen-0.5b
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Extraction of values from complex business documents
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-intent-ov | llmware | 2024-10-31T21:50:22Z | 32 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"base_model:llmware/slim-intent",
"base_model:quantized:llmware/slim-intent",
"license:apache-2.0",
"region:us"
] | null | 2024-09-07T06:11:18Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-intent
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov]
---
# slim-intent-ov
**slim-intent-ov** is a specialized function calling model that generates a python dictionary with an "intent" key and a value corresponding to the intent classification.
This is an OpenVino int4 quantized version of slim-intent, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-intent
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Intent categorization
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-ratings-ov | llmware | 2024-10-31T21:49:41Z | 32 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"base_model:llmware/slim-ratings",
"base_model:quantized:llmware/slim-ratings",
"license:apache-2.0",
"region:us"
] | null | 2024-09-07T06:04:23Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-ratings
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov]
---
# slim-ratings-ov
**slim-ratings-ov** is a specialized function calling model that generates a python dictionary with a "stars" key rating the sentiment/positivity of a text passage from 1 (poor) to 5 (very positive).
This is an OpenVino int4 quantized version of slim-ratings, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-ratings
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Sentiment 'stars' rating score of 1 (low) - 5 (high)
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-ner-ov | llmware | 2024-10-31T21:48:51Z | 26 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"base_model:llmware/slim-ner",
"base_model:quantized:llmware/slim-ner",
"license:apache-2.0",
"region:us"
] | null | 2024-09-07T05:40:24Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-ner
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov]
---
# slim-ner-ov
**slim-ner-ov** is a specialized function calling model that generates a python dictionary consisting of named entity types and the named entities identified in the text.
This is an OpenVino int4 quantized version of slim-ner, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-ner
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Extraction of named entity types from complex business documents
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
marklicata/M365_demo_8k | marklicata | 2024-10-31T21:48:13Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-29T23:10:14Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: M365_demo_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M365_demo_v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1596
## Model description
More information needed
## Intended uses & limitations
More information needed
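As a usage sketch, the checkpoint can be loaded with the standard transformers pipeline. The label set of this classifier is not documented in this card, so the output labels below are placeholders.
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub
classifier = pipeline("text-classification", model="marklicata/M365_demo_8k")

result = classifier("Example input text for classification.")
print(result)  # e.g., [{'label': 'LABEL_0', 'score': 0.98}]
```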
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7505 | 1.0 | 900 | 0.1955 |
| 0.1748 | 2.0 | 1800 | 0.1680 |
| 0.1092 | 3.0 | 2700 | 0.1596 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
llmware/slim-topics-ov | llmware | 2024-10-31T21:48:02Z | 2,844 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"emerald",
"base_model:llmware/slim-topics",
"base_model:quantized:llmware/slim-topics",
"license:apache-2.0",
"region:us"
] | null | 2024-09-06T21:00:43Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-topics
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov, emerald]
---
# slim-topics-ov
**slim-topics-ov** is a specialized function calling model that generates a topic description for a text passage, typically no more than 2-3 words.
This is an OpenVino int4 quantized version of slim-topics, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-topics
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Topic categorization and summarization
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-tags-ov | llmware | 2024-10-31T21:47:14Z | 24 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"base_model:llmware/slim-tags",
"base_model:quantized:llmware/slim-tags",
"license:apache-2.0",
"region:us"
] | null | 2024-09-06T20:58:41Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-tags
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov]
---
# slim-tags-ov
**slim-tags-ov** is a specialized function calling model that generates a list of tags, e.g., 'meaningful objects', from a text passage, which is useful for summarization and various retrieval strategies.
This is an OpenVino int4 quantized version of slim-tags, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-tags
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Tag generation, summarization and search/retrieval enrichment
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-sql-ov | llmware | 2024-10-31T21:46:16Z | 56 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"emerald",
"base_model:llmware/slim-sql-1b-v0",
"base_model:quantized:llmware/slim-sql-1b-v0",
"license:apache-2.0",
"region:us"
] | null | 2024-09-07T05:33:52Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-sql-1b-v0
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov, emerald]
---
# slim-sql-ov
**slim-sql-ov** is a small, specialized function calling model that takes a table schema and a natural language query as input and outputs a corresponding SQL statement that can be run against a database table. It is a very small text-to-SQL model designed for reasonable accuracy on single tables and relatively straightforward queries, and for easy integration into multi-step processes.
This is an OpenVino int4 quantized version of slim-sql-1b-v0, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
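A minimal sketch with the llmware library follows. Loading through `ModelCatalog` and passing the schema via `add_context` are assumptions based on llmware's slim-sql examples; the table schema and question are illustrative.
```python
from llmware.models import ModelCatalog

# Load the OpenVino-packaged text-to-SQL model through the llmware catalog
model = ModelCatalog().load_model("llmware/slim-sql-ov")

table_schema = "CREATE TABLE customers (name text, city text, annual_spend integer)"
question = "Which customers in Chicago have an annual spend over 5000?"

# The schema is passed as context; the model returns a SQL statement
response = model.inference(question, add_context=table_schema)
print(response)  # e.g., SELECT name FROM customers WHERE city = 'Chicago' AND annual_spend > 5000
```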
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-sql-1b-v0
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Text-to-SQL conversion
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-sentiment-ov | llmware | 2024-10-31T21:44:33Z | 80 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"emerald",
"base_model:llmware/slim-sentiment",
"base_model:quantized:llmware/slim-sentiment",
"license:apache-2.0",
"region:us"
] | null | 2024-08-31T10:20:01Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-sentiment
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov, emerald]
---
# slim-sentiment-ov
**slim-sentiment-ov** is a specialized function calling model that classifies the sentiment of a given text passage and generates a python dictionary with a "sentiment" key whose value is the assessed sentiment.
This is an OpenVino int4 quantized version of slim-sentiment, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
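Below is a minimal sketch using the llmware library. The `ModelCatalog` loading path and the exact response shape are assumptions based on llmware's SLIM tool conventions; the sample text is illustrative.
```python
from llmware.models import ModelCatalog

# Load the OpenVino-packaged sentiment model through the llmware catalog
model = ModelCatalog().load_model("llmware/slim-sentiment-ov")

text = "The earnings call was a disaster, and guidance was cut for the second time this year."

# Returns a python dictionary with a "sentiment" key
response = model.function_call(text)
print(response)  # e.g., {'llm_response': {'sentiment': ['negative']}}
```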
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-sentiment
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Sentiment analysis for Agent-based multi-step process workflows
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
llmware/slim-summary-tiny-ov | llmware | 2024-10-31T21:44:00Z | 41 | 1 | null | [
"openvino",
"llama",
"green",
"p1",
"llmware-fx",
"ov",
"emerald",
"base_model:llmware/slim-summary-tiny",
"base_model:quantized:llmware/slim-summary-tiny",
"license:apache-2.0",
"region:us"
] | null | 2024-08-31T11:51:53Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-summary-tiny
base_model_relation: quantized
tags: [green, p1, llmware-fx, ov, emerald]
---
# slim-summary-tiny-ov
**slim-summary-tiny-ov** is a specialized function calling model that summarizes a given text and generates as output a Python list of summary points.
This is an OpenVino int4 quantized version of slim-summary-tiny, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-summary-tiny
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Summary bulletpoints extracted from complex business documents
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
NewEden/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049-Q8_0-GGUF | NewEden | 2024-10-31T21:38:53Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049",
"base_model:quantized:cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T21:38:00Z | ---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
base_model: cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049
---
# Delta-Vector/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049-Q8_0-GGUF
This model was converted to GGUF format from [`cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049`](https://huggingface.co/cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cgato/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Delta-Vector/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049-Q8_0-GGUF --hf-file nemo-12b-thespice-v0.9-all-v2-kto-v0.1-e1-2049-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Delta-Vector/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049-Q8_0-GGUF --hf-file nemo-12b-thespice-v0.9-all-v2-kto-v0.1-e1-2049-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Delta-Vector/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049-Q8_0-GGUF --hf-file nemo-12b-thespice-v0.9-all-v2-kto-v0.1-e1-2049-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Delta-Vector/Nemo-12b-TheSpice-V0.9-All-v2-KTO-v0.1-E1-2049-Q8_0-GGUF --hf-file nemo-12b-thespice-v0.9-all-v2-kto-v0.1-e1-2049-q8_0.gguf -c 2048
```
|
llmware/llama-3.2-3b-instruct-onnx | llmware | 2024-10-31T21:38:51Z | 10 | 1 | null | [
"onnx",
"llama",
"green",
"p3",
"llmware-chat",
"ov",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2024-10-26T18:56:57Z | ---
license: llama3.2
inference: false
base_model: meta-llama/Llama-3.2-1B-Instruct
base_model_relation: quantized
tags:
- green
- p3
- llmware-chat
- ov
---
# llama-3.2-3b-instruct-onnx
**llama-3.2-3b-instruct-onnx** is an ONNX int4 quantized version of Llama 3.2 3B Instruct, providing a very small, very fast inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
[**llama-3.2-3b-instruct**](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) is a new 3B chat foundation model from Meta.
### Model Description
- **Developed by:** meta-llama
- **Quantized by:** llmware
- **Model type:** llama-3.2
- **Parameters:** 3 billion
- **Model Parent:** meta-llama/Llama-3.2-3B-Instruct
- **Language(s) (NLP):** English
- **License:** Llama 3.2 Community License
- **Uses:** General chat use cases
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai) |
nlpguy/smolchess | nlpguy | 2024-10-31T21:38:12Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T21:34:42Z | ---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- generated_from_trainer
model-index:
- name: smolchess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolchess
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use grokadamw with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 0.25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4847 | 0.0025 | 4 | 1.3890 |
| 1.2333 | 0.0050 | 8 | 1.2242 |
| 1.2154 | 0.0075 | 12 | 1.1705 |
| 1.1268 | 0.0100 | 16 | 1.1241 |
| 1.0556 | 0.0125 | 20 | 1.1055 |
| 1.0629 | 0.0150 | 24 | 1.0848 |
| 1.1023 | 0.0176 | 28 | 1.0764 |
| 1.102 | 0.0201 | 32 | 1.0554 |
| 1.0798 | 0.0226 | 36 | 1.0567 |
| 0.9436 | 0.0251 | 40 | 1.0365 |
| 1.0524 | 0.0276 | 44 | 1.0275 |
| 1.1201 | 0.0301 | 48 | 1.0198 |
| 1.0565 | 0.0326 | 52 | 1.0135 |
| 0.9082 | 0.0351 | 56 | 1.0084 |
| 1.0544 | 0.0376 | 60 | 0.9970 |
| 1.0034 | 0.0401 | 64 | 0.9939 |
| 0.8859 | 0.0426 | 68 | 0.9852 |
| 1.018 | 0.0451 | 72 | 0.9816 |
| 0.8901 | 0.0476 | 76 | 0.9761 |
| 0.8943 | 0.0502 | 80 | 0.9723 |
| 1.0486 | 0.0527 | 84 | 0.9718 |
| 1.0102 | 0.0552 | 88 | 0.9680 |
| 0.9617 | 0.0577 | 92 | 0.9602 |
| 0.9879 | 0.0602 | 96 | 0.9607 |
| 0.9482 | 0.0627 | 100 | 0.9523 |
| 1.0265 | 0.0652 | 104 | 0.9518 |
| 0.8865 | 0.0677 | 108 | 0.9493 |
| 1.0046 | 0.0702 | 112 | 0.9448 |
| 0.9593 | 0.0727 | 116 | 0.9384 |
| 1.0167 | 0.0752 | 120 | 0.9377 |
| 0.9041 | 0.0777 | 124 | 0.9345 |
| 0.8702 | 0.0803 | 128 | 0.9311 |
| 0.9117 | 0.0828 | 132 | 0.9333 |
| 0.936 | 0.0853 | 136 | 0.9262 |
| 0.9341 | 0.0878 | 140 | 0.9237 |
| 0.913 | 0.0903 | 144 | 0.9219 |
| 0.9205 | 0.0928 | 148 | 0.9204 |
| 0.9081 | 0.0953 | 152 | 0.9183 |
| 0.8826 | 0.0978 | 156 | 0.9162 |
| 0.9578 | 0.1003 | 160 | 0.9142 |
| 0.845 | 0.1028 | 164 | 0.9128 |
| 0.9254 | 0.1053 | 168 | 0.9102 |
| 0.9622 | 0.1078 | 172 | 0.9096 |
| 0.7854 | 0.1103 | 176 | 0.9085 |
| 0.9143 | 0.1129 | 180 | 0.9071 |
| 0.99 | 0.1154 | 184 | 0.9043 |
| 0.9855 | 0.1179 | 188 | 0.9038 |
| 0.9745 | 0.1204 | 192 | 0.9017 |
| 0.9532 | 0.1229 | 196 | 0.8998 |
| 0.9464 | 0.1254 | 200 | 0.8989 |
| 0.8713 | 0.1279 | 204 | 0.8962 |
| 0.8501 | 0.1304 | 208 | 0.8942 |
| 0.9065 | 0.1329 | 212 | 0.8936 |
| 0.8949 | 0.1354 | 216 | 0.8924 |
| 0.9504 | 0.1379 | 220 | 0.8900 |
| 0.9059 | 0.1404 | 224 | 0.8900 |
| 0.909 | 0.1429 | 228 | 0.8881 |
| 0.9684 | 0.1455 | 232 | 0.8864 |
| 0.968 | 0.1480 | 236 | 0.8865 |
| 0.9436 | 0.1505 | 240 | 0.8853 |
| 0.9166 | 0.1530 | 244 | 0.8841 |
| 0.977 | 0.1555 | 248 | 0.8825 |
| 0.9011 | 0.1580 | 252 | 0.8820 |
| 0.8842 | 0.1605 | 256 | 0.8812 |
| 0.9399 | 0.1630 | 260 | 0.8806 |
| 0.9211 | 0.1655 | 264 | 0.8791 |
| 0.8043 | 0.1680 | 268 | 0.8785 |
| 0.8406 | 0.1705 | 272 | 0.8778 |
| 0.8463 | 0.1730 | 276 | 0.8765 |
| 0.8638 | 0.1755 | 280 | 0.8762 |
| 0.894 | 0.1781 | 284 | 0.8761 |
| 0.8925 | 0.1806 | 288 | 0.8753 |
| 0.9029 | 0.1831 | 292 | 0.8754 |
| 0.809 | 0.1856 | 296 | 0.8749 |
| 0.9558 | 0.1881 | 300 | 0.8742 |
| 0.8286 | 0.1906 | 304 | 0.8736 |
| 0.8714 | 0.1931 | 308 | 0.8730 |
| 0.8562 | 0.1956 | 312 | 0.8728 |
| 0.858 | 0.1981 | 316 | 0.8723 |
| 0.9027 | 0.2006 | 320 | 0.8719 |
| 0.9023 | 0.2031 | 324 | 0.8716 |
| 0.856 | 0.2056 | 328 | 0.8712 |
| 0.8455 | 0.2082 | 332 | 0.8709 |
| 0.8886 | 0.2107 | 336 | 0.8705 |
| 0.8717 | 0.2132 | 340 | 0.8703 |
| 0.9145 | 0.2157 | 344 | 0.8700 |
| 0.9618 | 0.2182 | 348 | 0.8698 |
| 0.9083 | 0.2207 | 352 | 0.8697 |
| 0.9448 | 0.2232 | 356 | 0.8695 |
| 0.9188 | 0.2257 | 360 | 0.8693 |
| 0.8006 | 0.2282 | 364 | 0.8692 |
| 0.8222 | 0.2307 | 368 | 0.8691 |
| 0.8936 | 0.2332 | 372 | 0.8690 |
| 0.9366 | 0.2357 | 376 | 0.8689 |
| 0.9336 | 0.2382 | 380 | 0.8689 |
| 0.6878 | 0.2408 | 384 | 0.8689 |
| 0.9405 | 0.2433 | 388 | 0.8688 |
| 0.9022 | 0.2458 | 392 | 0.8688 |
| 0.8499 | 0.2483 | 396 | 0.8688 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF | featherless-ai-quants | 2024-10-31T21:37:01Z | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:grimjim/llama-3-aaditya-OpenBioLLM-8B",
"base_model:quantized:grimjim/llama-3-aaditya-OpenBioLLM-8B",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T21:23:05Z | ---
base_model: grimjim/llama-3-aaditya-OpenBioLLM-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# grimjim/llama-3-aaditya-OpenBioLLM-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [grimjim-llama-3-aaditya-OpenBioLLM-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [grimjim-llama-3-aaditya-OpenBioLLM-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/grimjim-llama-3-aaditya-OpenBioLLM-8B-GGUF/blob/main/grimjim-llama-3-aaditya-OpenBioLLM-8B-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
llmware/slim-emotions-onnx | llmware | 2024-10-31T21:36:28Z | 4 | 1 | transformers | [
"transformers",
"onnx",
"llama",
"green",
"p1",
"llmware-fx",
"emerald",
"base_model:llmware/slim-emotions",
"base_model:quantized:llmware/slim-emotions",
"license:apache-2.0",
"region:us"
] | null | 2024-06-15T00:17:10Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-emotions
base_model_relation: quantized
tags: [green, p1, llmware-fx, onnx, emerald]
---
# slim-emotions-onnx
**slim-emotions-onnx** is a specialized function calling model that classifies the emotion of a given text passage and generates a python dictionary with an "emotions" key whose value is the assessed emotion, e.g., ["surprised"].
This is an ONNX int4 quantized version of slim-emotions, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-emotions
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Emotions classifier designed for Agent-based multi-step workflows
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
ychu612/RSAVAV_FNSQ_CLF | ychu612 | 2024-10-31T21:36:24Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:yikuan8/Clinical-Longformer",
"base_model:finetune:yikuan8/Clinical-Longformer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-31T21:13:42Z | ---
library_name: transformers
base_model: yikuan8/Clinical-Longformer
tags:
- generated_from_trainer
model-index:
- name: RSAVAV_FNSQ_CLF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RSAVAV_FNSQ_CLF
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf | RichardErkhov | 2024-10-31T21:35:47Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T21:03:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bloom-1b7-ropes-Cont-IT-Step2 - GGUF
- Model creator: https://huggingface.co/alonzogarbanzo/
- Original model: https://huggingface.co/alonzogarbanzo/Bloom-1b7-ropes-Cont-IT-Step2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q2_K.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q2_K.gguf) | Q2_K | 0.98GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q3_K_S.gguf) | Q3_K_S | 1.1GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q3_K.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q3_K.gguf) | Q3_K | 1.2GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q3_K_M.gguf) | Q3_K_M | 1.2GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q3_K_L.gguf) | Q3_K_L | 1.25GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.IQ4_XS.gguf) | IQ4_XS | 1.27GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q4_0.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q4_0.gguf) | Q4_0 | 1.31GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.IQ4_NL.gguf) | IQ4_NL | 1.31GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q4_K_S.gguf) | Q4_K_S | 1.31GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q4_K.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q4_K.gguf) | Q4_K | 1.39GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q4_K_M.gguf) | Q4_K_M | 1.39GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q4_1.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q4_1.gguf) | Q4_1 | 1.41GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q5_0.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q5_0.gguf) | Q5_0 | 1.51GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q5_K_S.gguf) | Q5_K_S | 1.51GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q5_K.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q5_K.gguf) | Q5_K | 1.57GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q5_K_M.gguf) | Q5_K_M | 1.57GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q5_1.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q5_1.gguf) | Q5_1 | 1.61GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q6_K.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q6_K.gguf) | Q6_K | 1.72GB |
| [Bloom-1b7-ropes-Cont-IT-Step2.Q8_0.gguf](https://huggingface.co/RichardErkhov/alonzogarbanzo_-_Bloom-1b7-ropes-Cont-IT-Step2-gguf/blob/main/Bloom-1b7-ropes-Cont-IT-Step2.Q8_0.gguf) | Q8_0 | 2.23GB |
Original model description:
---
license: bigscience-bloom-rail-1.0
base_model: alonzogarbanzo/Bloom-1b7-winograd-wsc-IT-baseline
tags:
- generated_from_trainer
model-index:
- name: Bloom-1b7-ropes-Cont-IT-Step2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bloom-1b7-ropes-Cont-IT-Step2
This model is a fine-tuned version of [alonzogarbanzo/Bloom-1b7-winograd-wsc-IT-baseline](https://huggingface.co/alonzogarbanzo/Bloom-1b7-winograd-wsc-IT-baseline) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
Final Results: {'loss': 0.0261, 'grad_norm': 1.9494764804840088, 'learning_rate': 3.0000000000000004e-07, 'epoch': 10.0}
Average Results: {'train_runtime': 858.2936, 'train_samples_per_second': 2.33, 'train_steps_per_second': 0.583, 'train_loss': 0.4610937827527523, 'epoch': 10.0}
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
llmware/slim-topics-onnx | llmware | 2024-10-31T21:35:36Z | 3 | 1 | transformers | [
"transformers",
"onnx",
"llama",
"text-generation",
"green",
"p1",
"llmware-fx",
"base_model:llmware/slim-topics",
"base_model:quantized:llmware/slim-topics",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-06-15T00:22:03Z | ---
license: apache-2.0
inference: false
base_model: llmware/slim-topics
base_model_relation: quantized
tags: [green, p1, llmware-fx, onnx]
---
# slim-topics-onnx
**slim-topics-onnx** is a specialized function calling model that generates a topic description for a text passage, typically no more than 2-3 words.
This is an ONNX int4 quantized version of slim-topics, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-topics
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Topic categorization and summarization
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
rohan7998/emotion_tweet_roberta-base_2024-10-31 | rohan7998 | 2024-10-31T21:34:28Z | 196 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-31T21:34:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
llmware/llama-3.1-instruct-onnx | llmware | 2024-10-31T21:34:01Z | 10 | 1 | null | [
"onnx",
"llama",
"green",
"p8",
"llmware-chat",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-09-03T18:12:48Z | ---
license: llama3
inference: false
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
base_model_relation: quantized
tags:
- green
- p8
- llmware-chat
- onnx
---
# llama-3.1-instruct-onnx
**llama-3.1-instruct-onnx** is an ONNX int4 quantized version of Llama 3.1 Instruct, providing a fast inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
[**llama-3.1-instruct**](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) is a leading open source general foundation model from Meta.
### Model Description
- **Developed by:** meta-llama
- **Quantized by:** llmware
- **Model type:** llama-3.1
- **Parameters:** 8 billion
- **Model Parent:** meta-llama/Meta-Llama-3.1-8B-Instruct
- **Language(s) (NLP):** English
- **License:** Llama 3.1 Community License
- **Uses:** General chat use cases
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai) |
llmware/tiny-llama-chat-onnx | llmware | 2024-10-31T21:32:18Z | 43 | 1 | null | [
"onnx",
"llama",
"green",
"llmware-chat",
"p1",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-10-26T17:58:08Z | ---
license: apache-2.0
inference: false
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
base_model_relation: quantized
tags:
- green
- llmware-chat
- p1
- onnx
---
# tiny-llama-chat-onnx
**tiny-llama-chat-onnx** is an ONNX int4 quantized version of TinyLlama-Chat, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
[**tiny-llama-chat**](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) is the official chat finetuned version of tiny-llama.
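A minimal inference sketch via the llmware library is below. Loading this ONNX package through llmware's `ModelCatalog` is an assumption based on llmware's conventions for its quantized model repos.
```python
from llmware.models import ModelCatalog

# Load the ONNX-packaged chat model through the llmware catalog
model = ModelCatalog().load_model("llmware/tiny-llama-chat-onnx")

response = model.inference("What are the main advantages of quantizing a small chat model?")
print(response)
```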
### Model Description
- **Developed by:** TinyLlama
- **Quantized by:** llmware
- **Model type:** llama
- **Parameters:** 1.1 billion
- **Model Parent:** TinyLlama-1.1B-Chat-v1.0
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat and general purpose LLM
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai) |
MaziyarPanahi/BADMISTRAL-1.5B-GGUF | MaziyarPanahi | 2024-10-31T21:31:48Z | 41 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:UnfilteredAI/BADMISTRAL-1.5B",
"base_model:quantized:UnfilteredAI/BADMISTRAL-1.5B",
"region:us",
"conversational"
] | text-generation | 2024-10-31T21:26:40Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: BADMISTRAL-1.5B-GGUF
base_model: UnfilteredAI/BADMISTRAL-1.5B
inference: false
model_creator: UnfilteredAI
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/BADMISTRAL-1.5B-GGUF](https://huggingface.co/MaziyarPanahi/BADMISTRAL-1.5B-GGUF)
- Model creator: [UnfilteredAI](https://huggingface.co/UnfilteredAI)
- Original model: [UnfilteredAI/BADMISTRAL-1.5B](https://huggingface.co/UnfilteredAI/BADMISTRAL-1.5B)
## Description
[MaziyarPanahi/BADMISTRAL-1.5B-GGUF](https://huggingface.co/MaziyarPanahi/BADMISTRAL-1.5B-GGUF) contains GGUF format model files for [UnfilteredAI/BADMISTRAL-1.5B](https://huggingface.co/UnfilteredAI/BADMISTRAL-1.5B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf | RichardErkhov | 2024-10-31T21:30:31Z | 42 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T18:47:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Fimbulvetr-11B-v2 - GGUF
- Model creator: https://huggingface.co/Sao10K/
- Original model: https://huggingface.co/Sao10K/Fimbulvetr-11B-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Fimbulvetr-11B-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q2_K.gguf) | Q2_K | 3.73GB |
| [Fimbulvetr-11B-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Fimbulvetr-11B-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q3_K.gguf) | Q3_K | 4.84GB |
| [Fimbulvetr-11B-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Fimbulvetr-11B-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Fimbulvetr-11B-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Fimbulvetr-11B-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Fimbulvetr-11B-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Fimbulvetr-11B-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Fimbulvetr-11B-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q4_K.gguf) | Q4_K | 6.02GB |
| [Fimbulvetr-11B-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Fimbulvetr-11B-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Fimbulvetr-11B-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Fimbulvetr-11B-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Fimbulvetr-11B-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q5_K.gguf) | Q5_K | 7.08GB |
| [Fimbulvetr-11B-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Fimbulvetr-11B-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Fimbulvetr-11B-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q6_K.gguf) | Q6_K | 8.2GB |
| [Fimbulvetr-11B-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Fimbulvetr-11B-v2-gguf/blob/main/Fimbulvetr-11B-v2.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
---

*Cute girl to catch your attention.*
**https://huggingface.co/Sao10K/Fimbulvetr-11B-v2-GGUF <------ GGUF**
Fimbulvetr-v2 - A Solar-Based Model
***
4/4 Status Update:
got a few reqs on wanting to support me: https://ko-fi.com/sao10k
anyway, status on v3 - Halted for time being, working on dataset work mainly. it's a pain, to be honest.
the data I have isn't up to my standard for now. it's good, just not good enough
***
Prompt Formats - Alpaca or Vicuna. Either one works fine.
Recommended SillyTavern Presets - Universal Light
Alpaca:
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
Vicuna:
```
System: <Prompt>
User: <Input>
Assistant:
```
****
Changelogs:
25/2 - repo renamed to remove test, model card redone. Model's officially out.
<br>15/2 - Heavy testing complete. Good feedback.
***
<details><summary>Rant - Kept For Historical Reasons</summary>
Ramble to meet minimum length requirements:
Tbh i wonder if this shit is even worth doing. Like im just some broke guy lmao I've spent so much. And for what? I guess creds. Feels good when a model gets good feedback, but it seems like im invisible sometimes. I should be probably advertising myself and my models on other places but I rarely have the time to. Probably just internal jealousy sparking up here and now. Wahtever I guess.
Anyway cool EMT vocation I'm doing is cool except it pays peanuts, damn bruh 1.1k per month lmao. Government to broke to pay for shit. Pays the bills I suppose.
Anyway cool beans, I'm either going to continue the Solar Train or go to Mixtral / Yi when I get paid.
You still here?
</details><br>
|
llmware/phi-3-onnx | llmware | 2024-10-31T21:30:29Z | 7 | 1 | null | [
"onnx",
"phi3",
"green",
"llmware-chat",
"p3",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-09-03T17:00:53Z | ---
license: apache-2.0
inference: false
base_model: microsoft/Phi-3-mini-4k-instruct
base_model_relation: quantized
tags: [green, llmware-chat, p3, onnx]
---
# phi-3-onnx
**phi-3-onnx** is an ONNX int4 quantized version of [Microsoft Phi-3-mini-4k-instruct](https://www.huggingface.co/microsoft/Phi-3-mini-4k-instruct), providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
### Model Description
- **Developed by:** microsoft
- **Quantized by:** llmware
- **Model type:** phi3
- **Parameters:** 3.8 billion
- **Model Parent:** microsoft/Phi-3-mini-4k-instruct
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat, general-purpose LLM
- **Quantization:** int4
## Model Card Contact
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|