| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| mradermacher/Athene-V2-Agent-i1-GGUF | mradermacher | 2024-11-16T06:40:09Z | 122 | 2 | transformers | ["transformers", "gguf", "RLHF", "Nexusflow", "Athene", "Function Calling", "Agent", "Extraction", "en", "base_model:Nexusflow/Athene-V2-Agent", "base_model:quantized:Nexusflow/Athene-V2-Agent", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-11-15T15:39:07Z |
---
base_model: Nexusflow/Athene-V2-Agent
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- RLHF
- Nexusflow
- Athene
- Function Calling
- Agent
- Extraction
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexusflow/Athene-V2-Agent
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Athene-V2-Agent-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
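For example, the split Q5_K_S quant in the table below ships as two `.part*of2` files, which are joined by simple concatenation; a minimal Python sketch (equivalent to `cat part1 part2 > file`):

```python
# Minimal sketch: rejoin a multi-part GGUF by byte concatenation.
# Part filenames are taken from the Q5_K_S row in the table below.
import shutil

parts = [
    "Athene-V2-Agent.i1-Q5_K_S.gguf.part1of2",
    "Athene-V2-Agent.i1-Q5_K_S.gguf.part2of2",
]

with open("Athene-V2-Agent.i1-Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy; never holds a whole part in RAM
```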
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Athene-V2-Agent-i1-GGUF/resolve/main/Athene-V2-Agent.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| alrang/matchup_llama3_1b_merge | alrang | 2024-11-16T06:38:47Z | 121 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-16T06:34:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mradermacher/hermes-llama3-roleplay-1000-v4-GGUF | mradermacher | 2024-11-16T06:35:08Z | 32 | 0 | transformers | ["transformers", "gguf", "en", "base_model:Deev124/hermes-llama3-roleplay-1000-v4", "base_model:quantized:Deev124/hermes-llama3-roleplay-1000-v4", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-16T01:22:31Z |
---
base_model: Deev124/hermes-llama3-roleplay-1000-v4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Deev124/hermes-llama3-roleplay-1000-v4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v4-GGUF/resolve/main/hermes-llama3-roleplay-1000-v4.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
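Once downloaded, any of these files runs in a llama.cpp-based runtime. A minimal sketch with llama-cpp-python, assuming its `Llama.from_pretrained` helper (which fetches the chosen quant via `huggingface_hub`):

```python
# Sketch: load the Q4_K_M quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/hermes-llama3-roleplay-1000-v4-GGUF",
    filename="hermes-llama3-roleplay-1000-v4.Q4_K_M.gguf",
    n_ctx=4096,  # context window; remaining kwargs are forwarded to Llama()
)
out = llm("Describe your character in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```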
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| Chmir1662/matchup_llama3_1b_merge | Chmir1662 | 2024-11-16T06:33:37Z | 96 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-16T06:28:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mav23/MoE-Girl_400MA_1BT-GGUF | mav23 | 2024-11-16T06:27:07Z | 110 | 0 | transformers | ["transformers", "gguf", "axolotl", "moe", "roleplay", "base_model:ibm-granite/granite-3.0-1b-a400m-base", "base_model:quantized:ibm-granite/granite-3.0-1b-a400m-base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-16T06:12:05Z |
---
library_name: transformers
license: apache-2.0
base_model: ibm-granite/granite-3.0-1b-a400m-base
tags:
- axolotl
- moe
- roleplay
model-index:
- name: MoE_Girl_400MA_1BT
results: []
---
# MoE Girl 400mA 1bT

A finetune of Granite 3.0 by IBM, designed for roleplaying (and maybe general use cases if you try hard enough).
## Disclaimer
PLEASE do not expect godliness out of this, it's a model with _400 million_ active parameters. Expect something more akin to GPT-2.
## Quants
TODO!
## Prompting
Use ChatML.
```
<|im_start|>system
You are a helpful assistant who talks like a pirate.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
Yarr harr harr, me matey!<|im_end|>
```
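A hedged sketch of producing this layout with a Transformers chat template (the repo id below is a placeholder, and it assumes the tokenizer ships a ChatML template):

```python
# Sketch: build the ChatML prompt above via the tokenizer's chat template.
# "your/moe-girl-checkpoint" is a hypothetical placeholder repo id.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your/moe-girl-checkpoint")
messages = [
    {"role": "system", "content": "You are a helpful assistant who talks like a pirate."},
    {"role": "user", "content": "Hello there!"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the <|im_start|> layout shown above
```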
## Thanks
Special thanks to the members of Allura for testing and emotional support, as well as the creators of all the datasets that were used in the Special Sauce used to train this model. I love you all <3 - Fizz
| dd2558/matchup_llama3_1b_merge | dd2558 | 2024-11-16T06:25:25Z | 98 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-16T06:16:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF | mradermacher | 2024-11-16T06:15:12Z | 7 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:win10/MagpieLM-13.3B-Chat-v0.1", "base_model:quantized:win10/MagpieLM-13.3B-Chat-v0.1", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-16T01:02:17Z |
---
base_model: win10/MagpieLM-13.3B-Chat-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/win10/MagpieLM-13.3B-Chat-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MagpieLM-13.3B-Chat-v0.1-GGUF/resolve/main/MagpieLM-13.3B-Chat-v0.1.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| Ashmal/Clima_Vicuna_13B | Ashmal | 2024-11-16T06:02:58Z | 6 | 0 | null | ["pytorch", "llama", "license:apache-2.0", "region:us"] | null | 2024-11-16T05:09:51Z |
---
license: apache-2.0
---
| Xu-Ouyang/FloatLM_2.4B-int2-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-16T06:01:17Z | 88 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "2-bit", "gptq", "region:us"] | text-generation | 2024-11-16T06:00:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| jewoos/test_trainer | jewoos | 2024-11-16T05:58:24Z | 163 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-16T05:57:53Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4944
- Accuracy: 0.765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
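As a sketch, the equivalent `TrainingArguments` (argument names per Transformers 4.46; model and dataset wiring omitted):

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="test_trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",      # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="epoch",    # evaluate once per epoch, as in the results table
)
```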
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6402 | 0.625 |
| No log | 2.0 | 50 | 0.4944 | 0.765 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.20.1
| BEASTBOYJAY/my-fine-tuned-summarizer | BEASTBOYJAY | 2024-11-16T05:53:44Z | 103 | 0 | transformers | ["transformers", "safetensors", "encoder-decoder", "text2text-generation", "en", "dataset:ccdv/cnn_dailymail", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-11-16T05:36:03Z |
---
library_name: transformers
datasets:
- ccdv/cnn_dailymail
language:
- en
base_model:
- google-bert/bert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model generates a summary of a provided paragraph.
- **Developed by:** BEASTBOYJAY
- **Model type:** Transformer (encoder-decoder)
- **Language(s) (NLP):** English
- **Finetuned from model:** bert-base-uncased
## Uses
- For summarization purposes only
## Bias, Risks, and Limitations
This model was fine-tuned on a very small dataset and may need more fine-tuning for better results. (It was fine-tuned for educational purposes only.)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import EncoderDecoderModel, BertTokenizer


class TextSummarizer:
    def __init__(self, model_path, tokenizer_name="bert-base-uncased"):
        # Load the BERT tokenizer and the fine-tuned encoder-decoder checkpoint.
        self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
        self.model = EncoderDecoderModel.from_pretrained(model_path)

    def summarize(self, text, max_input_length=512):
        # Tokenize, truncating/padding the input to the encoder's maximum length.
        inputs = self.tokenizer(
            text,
            return_tensors="pt",
            truncation=True,
            padding="max_length",
            max_length=max_input_length,
        )
        # Beam-search decoding with a length penalty and n-gram blocking.
        summary_ids = self.model.generate(
            inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            decoder_start_token_id=self.tokenizer.cls_token_id,
            max_length=128,
            num_beams=4,
            length_penalty=1.5,
            no_repeat_ngram_size=1,
            early_stopping=True,
        )
        summary = self.tokenizer.decode(summary_ids[0], skip_special_tokens=True)
        return summary


if __name__ == "__main__":
    summarizer = TextSummarizer(model_path="BEASTBOYJAY/my-fine-tuned-summarizer")
    test_article = "Your article or paragraph"
    summary = summarizer.summarize(test_article)
    print("Generated Summary:", summary)
```
| RichardErkhov/unsloth_-_SmolLM-1.7B-gguf | RichardErkhov | 2024-11-16T05:52:04Z | 6 | 0 | null | ["gguf", "endpoints_compatible", "region:us"] | null | 2024-11-16T05:26:43Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SmolLM-1.7B - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/SmolLM-1.7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SmolLM-1.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q2_K.gguf) | Q2_K | 0.63GB |
| [SmolLM-1.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q3_K_S.gguf) | Q3_K_S | 0.72GB |
| [SmolLM-1.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q3_K.gguf) | Q3_K | 0.8GB |
| [SmolLM-1.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q3_K_M.gguf) | Q3_K_M | 0.8GB |
| [SmolLM-1.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q3_K_L.gguf) | Q3_K_L | 0.87GB |
| [SmolLM-1.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.IQ4_XS.gguf) | IQ4_XS | 0.88GB |
| [SmolLM-1.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q4_0.gguf) | Q4_0 | 0.92GB |
| [SmolLM-1.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.IQ4_NL.gguf) | IQ4_NL | 0.93GB |
| [SmolLM-1.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q4_K_S.gguf) | Q4_K_S | 0.93GB |
| [SmolLM-1.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q4_K.gguf) | Q4_K | 0.98GB |
| [SmolLM-1.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q4_K_M.gguf) | Q4_K_M | 0.98GB |
| [SmolLM-1.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q4_1.gguf) | Q4_1 | 1.02GB |
| [SmolLM-1.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q5_0.gguf) | Q5_0 | 1.11GB |
| [SmolLM-1.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.11GB |
| [SmolLM-1.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q5_K.gguf) | Q5_K | 1.14GB |
| [SmolLM-1.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.14GB |
| [SmolLM-1.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q5_1.gguf) | Q5_1 | 1.2GB |
| [SmolLM-1.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q6_K.gguf) | Q6_K | 1.31GB |
| [SmolLM-1.7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_SmolLM-1.7B-gguf/blob/main/SmolLM-1.7B.Q8_0.gguf) | Q8_0 | 1.7GB |
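A minimal sketch for fetching one of these files programmatically with `huggingface_hub` (pick any filename from the table):

```python
# Sketch: download a single quant from this repo into the local HF cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/unsloth_-_SmolLM-1.7B-gguf",
    filename="SmolLM-1.7B.Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF
```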
Original model description:
---
license: apache-2.0
base_model: HuggingFaceTB/SmolLM-1.7B
tags:
- alignment-handbook
- trl
- unsloth
datasets:
- Magpie-Align/Magpie-Pro-300K-Filtered
- bigcode/self-oss-instruct-sc2-exec-filter-50k
- teknium/OpenHermes-2.5
- HuggingFaceTB/everyday-conversations-llama3.1-2k
library_name: transformers
language:
- en
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here - also works for SmolLM!: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# SmolLM-1.7B-Instruct
<center>
<img src="https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png" alt="SmolLM" width="1100" height="600">
</center>
## Model Summary
SmolLM is a series of small language models available in three sizes: 135M, 360M, and 1.7B parameters.
These models are pre-trained on [SmolLM-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus), a curated collection of high-quality educational and synthetic data designed for training LLMs. For further details, we refer to our [blogpost](https://huggingface.co/blog/smollm).
To build SmolLM-Instruct, we finetuned the base models on publicly available datasets.
## Changelog
|Release|Description|
|-|-|
|v0.1| Initial release of SmolLM-Instruct. We finetune on the permissive subset of the [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) dataset, combined with [StarCoder2-Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Then, we perform DPO (Direct Preference Optimization) for one epoch on [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) for the 135M and 1.7B models, and [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) for the 360M model.|
|v0.2| We changed the finetuning mix to datasets more suitable for smol models. We train on a new dataset of 2k simple everyday conversations generated with llama3.1-70B [everyday-conversations-llama3.1-2k](https://huggingface.co/datasets/HuggingFaceTB/everyday-conversations-llama3.1-2k/), [Magpie-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered), [StarCoder2-Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), and a small subset of [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).|
v0.2 models are better at staying on topic and responding appropriately to standard prompts, such as greetings and questions about their role as AI assistants. SmolLM-360M-Instruct (v0.2) has a 63.3% win rate over SmolLM-360M-Instruct (v0.1) on AlpacaEval. You can find the details [here](https://huggingface.co/datasets/HuggingFaceTB/alpaca_eval_details/).
You can load v0.1 checkpoint by specifying `revision="v0.1"` in the transformers code:
```python
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B-Instruct", revision="v0.1")
```
## Usage
### Local Applications
⚡ For local applications, you can find optimized implementations of the model in MLC, GGUF and Transformers.js formats, in addition to fast in-browser demos in this collection: https://huggingface.co/collections/HuggingFaceTB/local-smollms-66c0f3b2a15b4eed7fb198d0
We noticed that 4-bit quantization degrades the quality of the 135M and 360M models, so we use `q0f16` for the MLC and ONNX/Transformers.js checkpoints in the WebGPU demos. We also suggest using temperature 0.2 and top-p 0.9.
### Transformers
```bash
pip install transformers
```
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM-1.7B-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM-1.7B-Instruct --device cpu
```
## Limitations
The generated content may not always be factually accurate, logically consistent, or free from biases present in the training data; we invite users to leverage these models as assistive tools rather than definitive sources of information. We find that they can handle general knowledge questions, creative writing and basic Python programming. But they are English-only and may have difficulty with arithmetic, editing tasks and complex reasoning. For more details about the models' capabilities, please refer to our [blog post](https://huggingface.co/blog/smollm).
## Training parameters
We train the models using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) with the datasets mentioned in the changelog. For v0.2 we used the following parameters (most of them are from the Zephyr Gemma recipe):
- 1 epoch
- lr 1e-3
- cosine schedule
- warmup ratio 0.1
- global batch size 262k tokens
You can find the training recipe here: https://github.com/huggingface/alignment-handbook/tree/smollm/recipes/smollm
# Citation
```bibtex
@misc{allal2024SmolLM,
  title={SmolLM - blazingly fast and remarkably powerful},
  author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Leandro von Werra and Thomas Wolf},
  year={2024},
}
```
| RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf | RichardErkhov | 2024-11-16T05:19:33Z | 11 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-16T04:07:36Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
wafl-phi3.5-mini-instruct - GGUF
- Model creator: https://huggingface.co/fractalego/
- Original model: https://huggingface.co/fractalego/wafl-phi3.5-mini-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [wafl-phi3.5-mini-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q2_K.gguf) | Q2_K | 1.32GB |
| [wafl-phi3.5-mini-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [wafl-phi3.5-mini-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q3_K.gguf) | Q3_K | 1.82GB |
| [wafl-phi3.5-mini-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [wafl-phi3.5-mini-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [wafl-phi3.5-mini-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [wafl-phi3.5-mini-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q4_0.gguf) | Q4_0 | 2.03GB |
| [wafl-phi3.5-mini-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [wafl-phi3.5-mini-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [wafl-phi3.5-mini-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q4_K.gguf) | Q4_K | 2.23GB |
| [wafl-phi3.5-mini-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [wafl-phi3.5-mini-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q4_1.gguf) | Q4_1 | 2.24GB |
| [wafl-phi3.5-mini-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q5_0.gguf) | Q5_0 | 2.46GB |
| [wafl-phi3.5-mini-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [wafl-phi3.5-mini-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q5_K.gguf) | Q5_K | 2.62GB |
| [wafl-phi3.5-mini-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [wafl-phi3.5-mini-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q5_1.gguf) | Q5_1 | 2.68GB |
| [wafl-phi3.5-mini-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q6_K.gguf) | Q6_K | 2.92GB |
| [wafl-phi3.5-mini-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/fractalego_-_wafl-phi3.5-mini-instruct-gguf/blob/main/wafl-phi3.5-mini-instruct.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quadranttechnologies/qhub-blip-image-captioning-finetuned
|
quadranttechnologies
| 2024-11-16T05:08:41Z | 295 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"blip",
"image-text-to-text",
"art",
"image-to-text",
"en",
"dataset:phiyodr/coco2017",
"arxiv:2201.12086",
"base_model:Salesforce/blip-image-captioning-base",
"base_model:finetune:Salesforce/blip-image-captioning-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-11-07T23:29:13Z |
---
language:
- en
base_model:
- Salesforce/blip-image-captioning-base
pipeline_tag: image-to-text
tags:
- art
license: apache-2.0
metrics:
- bleu
library_name: transformers
datasets:
- phiyodr/coco2017
---
### Fine-Tuned Image Captioning Model
This is a fine-tuned version of BLIP for visual question answering on retail product images. The model was fine-tuned on a custom dataset of images from an online retail platform, annotated with product descriptions.
This experimental model can be used to answer questions about product images in the retail industry. Example use cases include product metadata enrichment and validation of human-generated product descriptions.
### Sample model predictions
| Input Image | Prediction |
|-------------------------------------------|--------------------------------|
|<img src="https://cdn-uploads.huggingface.co/production/uploads/672d17c98e098bf429c83670/KTnUTaTjrIG7dUyR1aMho.png" alt="image/png" width="100" height="100" /> | kitchenaid artisann stand mixer|
|<img src="https://cdn-uploads.huggingface.co/production/uploads/672d17c98e098bf429c83670/Skt_sjYxbfQu056v2C1Ym.png" width="100" height="100" /> | a bottle of milk sitting on a counter |
|<img src="https://cdn-uploads.huggingface.co/production/uploads/672d17c98e098bf429c83670/Zp1OMzO4BEs7s9k3O5ij7.jpeg" alt="image/jpeg" width="100" height="100" />| dove sensitive skin lotion |
|<img src="https://cdn-uploads.huggingface.co/production/uploads/672d17c98e098bf429c83670/dYNo38En0M0WpKONS8StX.jpeg" alt="bread bag" width="100" height="100" /> | bread bag with blue plastic handl|
|<img src="https://cdn-uploads.huggingface.co/production/uploads/672d17c98e098bf429c83670/oypT9482ysQjC0usEHGbT.png" alt="image/png" width="100" height="100" /> | bush ' s best white beans |
### How to use the model:
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("quadranttechnologies/qhub-blip-image-captioning-finetuned")
model = BlipForConditionalGeneration.from_pretrained("quadranttechnologies/qhub-blip-image-captioning-finetuned")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Xu-Ouyang/FloatLM_3.9B-int2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-16T05:04:28Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-16T05:03:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
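As a minimal sketch (assuming this 2-bit GPTQ checkpoint loads through 🤗 transformers' quantization integration, which requires the `optimum` and `auto-gptq` packages; the prompt is illustrative):
```python
# pip install transformers optimum auto-gptq
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Xu-Ouyang/FloatLM_3.9B-int2-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# The GPTQ quantization config stored in the repo should be
# applied automatically on load.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer("Gravity is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```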
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF
|
featherless-ai-quants
| 2024-11-16T04:46:45Z | 6 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Slomb/MN-CelesteGold-12B-Merge",
"base_model:quantized:Slomb/MN-CelesteGold-12B-Merge",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-16T04:33:04Z |
---
base_model: Slomb/MN-CelesteGold-12B-Merge
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Slomb/MN-CelesteGold-12B-Merge GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Slomb-MN-CelesteGold-12B-Merge-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [Slomb-MN-CelesteGold-12B-Merge-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [Slomb-MN-CelesteGold-12B-Merge-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [Slomb-MN-CelesteGold-12B-Merge-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [Slomb-MN-CelesteGold-12B-Merge-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [Slomb-MN-CelesteGold-12B-Merge-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [Slomb-MN-CelesteGold-12B-Merge-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [Slomb-MN-CelesteGold-12B-Merge-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [Slomb-MN-CelesteGold-12B-Merge-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [Slomb-MN-CelesteGold-12B-Merge-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [Slomb-MN-CelesteGold-12B-Merge-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF/blob/main/Slomb-MN-CelesteGold-12B-Merge-Q8_0.gguf) | 12419.10 MB |
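As a minimal usage sketch (assuming a local [llama.cpp](https://github.com/ggerganov/llama.cpp) build and the `huggingface-cli` tool; the chosen quant is just an example):
```bash
# Fetch a single quant file from this repo
huggingface-cli download featherless-ai-quants/Slomb-MN-CelesteGold-12B-Merge-GGUF \
  Slomb-MN-CelesteGold-12B-Merge-Q4_K_M.gguf --local-dir .

# Run an interactive session with llama.cpp
llama-cli -m Slomb-MN-CelesteGold-12B-Merge-Q4_K_M.gguf -cnv
```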
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
Nutanix/llama-30b_checkpoint-3800_20241116-043248-merged
|
Nutanix
| 2024-11-16T04:44:14Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T04:33:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
axolotl-ai-co/SmolLM2-135M-bnb-nf4-bf16
|
axolotl-ai-co
| 2024-11-16T04:38:09Z | 2,011 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-15T21:35:05Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 135M model was trained on 2 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
```bash
pip install transformers
```
#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-135M"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "HuggingFaceTB/SmolLM2-135M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 723.56 MB
```
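Note that this particular repository (`axolotl-ai-co/SmolLM2-135M-bnb-nf4-bf16`) ships a bitsandbytes 4-bit (NF4) checkpoint, per its tags. A minimal loading sketch, assuming the saved quantization config is applied automatically (`bitsandbytes` must be installed):
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "axolotl-ai-co/SmolLM2-135M-bnb-nf4-bf16"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# The NF4 quantization config saved with the checkpoint should be
# picked up automatically; no BitsAndBytesConfig needs to be passed.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```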
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base pre-trained model
| Metrics | SmolLM2-135M-8k | SmolLM-135M |
|:-------------------|:----------------:|:------------:|
| HellaSwag | **42.1** | 41.2 |
| ARC (Average) | **43.9** | 42.4 |
| PIQA | 68.4 | 68.4 |
| MMLU (cloze) | **31.5** | 30.2 |
| CommonsenseQA | **33.9** | 32.7 |
| TriviaQA | 4.1 | **4.3** |
| Winogrande | 51.3 | 51.3 |
| OpenBookQA | **34.6** | 34.0 |
| GSM8K (5-shot) | **1.4** | 1.0 |
## Instruction model
| Metric | SmolLM2-135M-Instruct | SmolLM-135M-Instruct |
|:-----------------------------|:---------------------:|:--------------------:|
| IFEval (Average prompt/inst) | **29.9** | 17.2 |
| MT-Bench | **1.98** | 1.68 |
| HellaSwag | **40.9** | 38.9 |
| ARC (Average) | **37.3** | 33.9 |
| PIQA | **66.3** | 64.0 |
| MMLU (cloze) | **29.3** | 28.3 |
| BBH (3-shot) | **28.2** | 25.2 |
| GSM8K (5-shot) | 1.4 | 1.4 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 2T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
```
|
danielthx/videomae-base-finetuned-ucf101-subset
|
danielthx
| 2024-11-16T04:15:38Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-11-16T04:15:20Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4855
- Accuracy: 0.8516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.0939 | 0.2568 | 38 | 1.7895 | 0.6143 |
| 0.809 | 1.2568 | 76 | 0.8634 | 0.7143 |
| 0.4345 | 2.2568 | 114 | 0.4870 | 0.8286 |
| 0.2773 | 3.2297 | 148 | 0.3680 | 0.9286 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
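A minimal inference sketch using the 🤗 transformers video-classification pipeline (the clip path is a placeholder; a video backend such as `av` or `decord` is assumed to be installed):
```python
# pip install transformers av
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="danielthx/videomae-base-finetuned-ucf101-subset",
)
# The pipeline samples frames from the clip and returns the top
# predicted labels with scores.
print(classifier("path/to/clip.mp4"))
```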
|
Xu-Ouyang/FloatLM_1.1B-int2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-16T04:02:00Z | 80 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-16T04:01:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/StableCode-text2SQL-schemaReduzido-GGUF
|
mradermacher
| 2024-11-16T03:52:11Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:NESPED-GEN/StableCode-text2SQL-schemaReduzido",
"base_model:quantized:NESPED-GEN/StableCode-text2SQL-schemaReduzido",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-16T03:28:31Z |
---
base_model: NESPED-GEN/StableCode-text2SQL-schemaReduzido
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NESPED-GEN/StableCode-text2SQL-schemaReduzido
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
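For example (a minimal sketch, assuming a local llama.cpp build; the part-file naming is illustrative of how split quants are typically distributed):
```bash
# Single-file quants from the table below can be used directly:
llama-cli -m StableCode-text2SQL-schemaReduzido.Q4_K_M.gguf -p "Translate to SQL:"

# Multi-part quants are concatenated into one file before loading:
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```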
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/StableCode-text2SQL-schemaReduzido-GGUF/resolve/main/StableCode-text2SQL-schemaReduzido.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xu-Ouyang/FloatLM_830M-int2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-16T03:51:03Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-16T03:50:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huwhitememes/tulsigabbard-lora
|
huwhitememes
| 2024-11-16T03:41:18Z | 11 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-16T03:40:01Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/tulsigabbard-lora_009920_00_20241115183657.png
text: A photo of Tulsi Gabbard, Tulsi gabbard, Tulsi,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of Tulsi Gabbard, Tulsi gabbard, Tulsi,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# tulsigabbard-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of Tulsi Gabbard, Tulsi gabbard, Tulsi,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
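Alternatively, a minimal [🧨 diffusers](https://github.com/huggingface/diffusers) loading sketch (the weight file name below is an assumption; check the repository's file list):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is an assumption; adjust to the actual .safetensors file.
pipeline.load_lora_weights(
    "huwhitememes/tulsigabbard-lora", weight_name="tulsigabbard-lora.safetensors"
)
image = pipeline("A photo of Tulsi Gabbard, Tulsi gabbard, Tulsi,").images[0]
image.save("tulsi.png")
```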
|
mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF
|
mradermacher
| 2024-11-16T03:40:11Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:kuyesu22/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG",
"base_model:quantized:kuyesu22/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-16T01:52:34Z |
---
base_model: kuyesu22/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/kuyesu22/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG-i1-GGUF/resolve/main/Llama-3.2-3B-Instruct-Sunbird-Dialogue-RAG.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PeterLiuWT00376/xlm-roberta-base-finetuned-panx-de
|
PeterLiuWT00376
| 2024-11-16T03:35:06Z | 133 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-16T03:22:16Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- F1: 0.8579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2605 | 1.0 | 525 | 0.1524 | 0.8284 |
| 0.1275 | 2.0 | 1050 | 0.1351 | 0.8535 |
| 0.0796 | 3.0 | 1575 | 0.1356 | 0.8579 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
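Given the model name, this is presumably a named-entity-recognition fine-tune on the German PAN-X split; a minimal inference sketch (the example sentence is illustrative):
```python
# pip install transformers
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="PeterLiuWT00376/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```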
|
AlekseyCalvin/Akhmatova_Flux_LoRA_SilverAgePoets_v3_DeDistilledTrained
|
AlekseyCalvin
| 2024-11-16T03:34:00Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"image-generation",
"flux-diffusers",
"photo",
"realism",
"character",
"historical person",
"poetry",
"literature",
"history",
"archival",
"text-to-image",
"en",
"base_model:AlekseyCalvin/Colossus_2.1_dedistilled_by_AfroMan4peace",
"base_model:adapter:AlekseyCalvin/Colossus_2.1_dedistilled_by_AfroMan4peace",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-11-16T02:59:16Z |
---
license: apache-2.0
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
- image-generation
- flux-diffusers
- diffusers
- photo
- realism
- character
- historical person
- poetry
- literature
- history
- archival
base_model: "AlekseyCalvin/Colossus_2.1_dedistilled_by_AfroMan4peace"
pipeline_tag: text-to-image
library_name: diffusers
emoji: 🔜
instance_prompt: Anna AKHMATOVA, blemished skin texture with slight wrinkles
widget:
- text: >-
agitprop Constructivist poster of the poet Anna AKHMATOVA calling out "JOIN RCA!" in a speech bubble, over satirical cartoon of cool punky diverse teenage gen-z revolutionaries
output:
url: AkhmDedistilled1.jpg
- text: >-
vintage side-view photograph of young Anna AKHMATOVA, classic analog color photography
output:
url: AnnaPoeticsWill.jpg
---
<Gallery />
# Anna Akhmatova Flux Low-Rank Adapter (LoRA) Version 2 by SilverAgePoets.com
Trained on a dataset of 60 vintage photos (most of them colorized by us and/or by [Klimbim](https://klimbim2020.wordpress.com/)). <br>
And capturing the legendary **poet**: <br>
**Anna Andreevna Akhmatova** <br> *(b.06/26/1889-d.03/05/1966)* <br>
For this LoRA we used highly detailed manually-composed paragraph captions. <br>
It was trained for 1600 steps (a 1300-step checkpoint is also included) at a diffusion-transformer learning rate of 0.0004, with dim/alpha of 32, batch size 1, and the AdamW8bit optimizer! Minimal synthetic data (just a few reluctant upscales), zero auto-generated captions! <br>
**VERSION 3 NOTE:** <br>
This third version of the Akhmatova LoRA was trained on the **Colossus 2.1 Dedistilled Flux model by AfroMan4Peace**, available [here](https://huggingface.co/AlekseyCalvin/Colossus_2.1_dedistilled_by_AfroMan4peace) in a diffusers format and [here at CivitAI](https://civitai.com/models/833086/colossus-project-flux). <br>
As of writing this blurb, we haven't yet tested this LoRA enough to say much concretely, but our other adapters trained over de-distilled modifications of FLUX have proven more versatile than most base-model-trained LoRAs in terms of compatibility and output variability. <br>
In parallel, we've also trained yet another Akhmatova LoRA (version 2) over a regular version of Flux, to enable a better basis for comparative testing. That version is available in a different repo [here](https://huggingface.co/AlekseyCalvin/Akhmatova_Flux_LoRA_SilverAgePoets_v2_regularFluxD). <br>
**MORE INFO:** <br>
This is a **rank-32 historical LoRA for Flux** (whether of a [Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), a [Schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell), or a [Soon®](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) sort...) <br>
Use it to diffusely diversify the presence of Akhmatova's deathless visage in our strange latter-day world!
And once you're faced with this poet's iconic penetrating stare, do lend your ears to her as well: listen in to her voice!
Wherefrom might this voice resound for you? A dusty paperback? Google search? Maybe a clip on YouTube? Or, say, your very memory reciting verses suddenly recalled?<br>
In any case, we'll offer you some echoes to rely on, if you will:
Namely, our **translations of Akhmatova's verse-works**, adapted from a proto-Soviet song-tongue into a Worldish one...<br>
And found, along with many other poets' songs and tomes...
Over **at [SilverAgePoets.com](https://www.silveragepoets.com/akhmatovamain)!**
## Trigger words
You should use `AKHMATOVA` or `Anna Akhmatova` or `vintage autochrome photograph of Anna Akhmatova` to summon the poet's latent spirit.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# Load the fp16 base pipeline onto the GPU; this LoRA was trained on a
# de-distilled Flux variant (see above), but loads onto Flux bases as usual.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Load the LoRA weights from this (v3) repo rather than the v2 repo.
pipeline.load_lora_weights('AlekseyCalvin/Akhmatova_Flux_LoRA_SilverAgePoets_v3_DeDistilledTrained', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/FluxiIA-Small_Brisa-GGUF
|
mradermacher
| 2024-11-16T03:33:12Z | 18 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:J-LAB/FluxiIA-Small_Brisa",
"base_model:quantized:J-LAB/FluxiIA-Small_Brisa",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-16T01:34:36Z |
---
base_model: J-LAB/FluxiIA-Small_Brisa
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/J-LAB/FluxiIA-Small_Brisa
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
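For a quick programmatic test, a minimal sketch using `llama-cpp-python` (assuming the package is installed and one of the files from the table below, e.g. the Q4_K_M quant, has been downloaded):
```py
from llama_cpp import Llama

# Point model_path at a downloaded quant from the table below.
llm = Llama(model_path="FluxiIA-Small_Brisa.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```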
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/FluxiIA-Small_Brisa-GGUF/resolve/main/FluxiIA-Small_Brisa.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/internlm2-chat-7b-llama-GGUF
|
mradermacher
| 2024-11-16T03:28:12Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:bartowski/internlm2-chat-7b-llama",
"base_model:quantized:bartowski/internlm2-chat-7b-llama",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T20:43:30Z |
---
base_model: bartowski/internlm2-chat-7b-llama
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bartowski/internlm2-chat-7b-llama
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/internlm2-chat-7b-llama-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
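Since this is a chat model, a minimal sketch using `huggingface_hub` to fetch a quant from the table below and `llama-cpp-python`'s chat API (both assumed installed) might look like:
```py
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M is a "fast, recommended" pick).
path = hf_hub_download(
    repo_id="mradermacher/internlm2-chat-7b-llama-GGUF",
    filename="internlm2-chat-7b-llama.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}]
)
print(reply["choices"][0]["message"]["content"])
```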
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2-chat-7b-llama-GGUF/resolve/main/internlm2-chat-7b-llama.f16.gguf) | f16 | 15.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xu-Ouyang/FloatLM_190M-int2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-16T03:21:31Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-16T03:21:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
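Since the repo tags mark this as a 2-bit GPTQ checkpoint, loading should follow the standard `transformers` GPTQ path; a sketch, assuming a GPTQ backend such as `auto-gptq`/`gptqmodel` is installed and supports 2-bit kernels:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Xu-Ouyang/FloatLM_190M-int2-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(repo)
# transformers dispatches to the GPTQ kernels based on the stored quant config.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```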
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GitBag/reasoning_rebel_iter_4_1731513485_eta_1e2_lr_3e-7_1731709582
|
GitBag
| 2024-11-16T03:21:15Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T03:04:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HappyAIUser/AtmaSiddhiGPTv20-16bit
|
HappyAIUser
| 2024-11-16T03:21:04Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T02:25:32Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/FloatLM_99M-int2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-16T03:16:04Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-16T03:15:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
getad72493/smacnkd
|
getad72493
| 2024-11-16T03:12:43Z | 16 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-11-15T17:42:32Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Photo of an 18-year-old girl, pretty face, (white hijab), young, teen,
seragamsmahijab, <lora:SeragamSMAHijab_v1:1>, caught naked
parameters:
negative_prompt: >-
((negative_hand-neg)), EasyNegative, bad-artist, mole, ugly face, missing
fingers, bad fingers, (old), (mature), low resolution, watermark, text,
logo, flat background, monochrome, grayscale, dark background, skinny
body, small tits, small breasts, big forehead, ((hair)), nude, nsfw
output:
url: images/00000-1909920771.jpeg
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: caught naked, seragamsmahijab
---
# smacnkd
<Gallery />
## Trigger words
You should use `caught naked` and `seragamsmahijab` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/getad72493/smacnkd/tree/main) them in the Files & versions tab.
|
zakariamtl/amina
|
zakariamtl
| 2024-11-16T03:07:09Z | 11 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-16T03:07:07Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Amina
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zakariamtl/amina', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
VoHuuTriDung/bert-finetuned-ner
|
VoHuuTriDung
| 2024-11-16T02:58:25Z | 105 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-16T02:44:31Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9364027823782709
- name: Recall
type: recall
value: 0.9515314708852238
- name: F1
type: f1
value: 0.9439065108514191
- name: Accuracy
type: accuracy
value: 0.986504385706717
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9364
- Recall: 0.9515
- F1: 0.9439
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
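For reproducibility, the hyperparameters above map onto `transformers` `TrainingArguments` roughly as follows (a sketch; dataset loading, tokenization, and the `Trainer` call are omitted):
```py
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; the Adam betas/epsilon are defaults.
args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```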
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0743 | 1.0 | 1756 | 0.0601 | 0.9113 | 0.9409 | 0.9259 | 0.9834 |
| 0.0342 | 2.0 | 3512 | 0.0657 | 0.9382 | 0.9478 | 0.9430 | 0.9858 |
| 0.0211 | 3.0 | 5268 | 0.0611 | 0.9364 | 0.9515 | 0.9439 | 0.9865 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Marialab/whisper-small-dr-ar-TREL
|
Marialab
| 2024-11-16T02:55:42Z | 76 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Custom_activation2_from_scratch train_whisper(12layerschange,10000_2000_2000_200)",
"generated_from_trainer",
"ar",
"dataset:darija-c",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-16T02:54:43Z |
---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-small
tags:
- Custom_activation2_from_scratch train_whisper(12layerschange,10000_2000_2000_200)
- generated_from_trainer
datasets:
- darija-c
metrics:
- bleu
model-index:
- name: 'Whisper small darija translate TREL'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small darija translate TREL
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Darija-C dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Bleu: 0.2051
## Model description
More information needed
## Intended uses & limitations
More information needed
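Pending a completed card, the checkpoint can be tried with the standard `transformers` speech-recognition pipeline; a minimal sketch (the audio filename is a placeholder):
```py
from transformers import pipeline

# Load this repo's fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Marialab/whisper-small-dr-ar-TREL",
)

# Replace with a real audio file (wav/flac/mp3) containing Darija speech.
print(asr("sample_darija.wav")["text"])
```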
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:--------:|:-----:|:---------------:|:------:|
| 0.7202 | 133.3333 | 2000 | 0.5776 | 0.0167 |
| 0.0771 | 266.6667 | 4000 | 0.0292 | 0.5327 |
| 0.0003 | 400.0 | 6000 | 0.0003 | 0.4464 |
| 0.0001 | 533.3333 | 8000 | 0.0001 | 0.2492 |
| 0.0001 | 666.6667 | 10000 | 0.0001 | 0.2051 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 2.19.2
- Tokenizers 0.20.3
|
noneUsername/SauerkrautLM-v2-14b-DPO-W8A8-Dynamic-Per-Token
|
noneUsername
| 2024-11-16T02:46:16Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:VAGOsolutions/SauerkrautLM-v2-14b-DPO",
"base_model:finetune:VAGOsolutions/SauerkrautLM-v2-14b-DPO",
"8-bit",
"region:us"
] | null | 2024-11-16T02:32:09Z |
---
base_model:
- VAGOsolutions/SauerkrautLM-v2-14b-DPO
---
vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.848|± |0.0228|
| | |strict-match | 5|exact_match|↑ |0.896|± |0.0193|
vllm (pretrained=/root/autodl-tmp/SauerkrautLM-v2-14b-DPO,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.832|± |0.0237|
| | |strict-match | 5|exact_match|↑ |0.852|± |0.0225|
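The results above come from lm-evaluation-harness with a vLLM backend; a minimal sketch of an equivalent run through the Python API (the model paths are the local ones from the headers above):
```py
import lm_eval

# Evaluate the W8A8 checkpoint on GSM8K with the settings shown above.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=/root/autodl-tmp/output,"
        "add_bos_token=true,tensor_parallel_size=2,"
        "max_model_len=2048,dtype=bfloat16"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```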
|
Rich-J/subnet29_upload_c02_N15_0
|
Rich-J
| 2024-11-16T02:41:06Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T02:38:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/gemma-2-9b-it-v2.1-GGUF
|
mradermacher
| 2024-11-16T02:40:11Z | 50 | 1 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"krx",
"en",
"base_model:homeb82784/gemma-2-9b-it-v2.1",
"base_model:quantized:homeb82784/gemma-2-9b-it-v2.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-16T02:02:32Z |
---
base_model: homeb82784/gemma-2-9b-it-v2.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
- krx
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/homeb82784/gemma-2-9b-it-v2.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
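Recent versions of `llama-cpp-python` can also pull a quant straight from the Hub; a minimal sketch (assuming a version that provides `Llama.from_pretrained`):
```py
from llama_cpp import Llama

# Fetch and load the Q4_K_M quant directly from this repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/gemma-2-9b-it-v2.1-GGUF",
    filename="gemma-2-9b-it-v2.1.Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Give one sentence about Gemma 2.", max_tokens=48)["choices"][0]["text"])
```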
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-v2.1-GGUF/resolve/main/gemma-2-9b-it-v2.1.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Setpember/Jon_GPT2M_DPO_props_epi_2
|
Setpember
| 2024-11-16T02:38:15Z | 197 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T02:35:12Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekinakyurek/marc-8B-finetuned-llama3
|
ekinakyurek
| 2024-11-16T02:30:31Z | 30 | 3 | null |
[
"pytorch",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-11-10T16:49:07Z |
---
license: apache-2.0
---
|
relaxml/Llama-3.1-405B-Instruct-QTIP-2Bit
|
relaxml
| 2024-11-16T02:29:26Z | 7 | 3 | null |
[
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.1-405B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-405B-Instruct",
"region:us"
] | null | 2024-10-16T05:51:47Z |
---
base_model:
- meta-llama/Llama-3.1-405B-Instruct
---

|
mradermacher/internlm2_5-7b-chat-i1-GGUF
|
mradermacher
| 2024-11-16T02:28:11Z | 27 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:internlm/internlm2_5-7b-chat",
"base_model:quantized:internlm/internlm2_5-7b-chat",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-16T00:32:14Z |
---
base_model: internlm/internlm2_5-7b-chat
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/internlm/internlm2_5-7b-chat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF/resolve/main/internlm2_5-7b-chat.i1-Q6_K.gguf) | i1-Q6_K | 6.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Setpember/Jon_GPT2M_DPO_props_epi_1
|
Setpember
| 2024-11-16T02:25:51Z | 198 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T02:25:07Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnx-community/camembertv2-base-xnli
|
onnx-community
| 2024-11-16T02:21:59Z | 5 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"roberta",
"text-classification",
"base_model:almanach/camembertv2-base-xnli",
"base_model:quantized:almanach/camembertv2-base-xnli",
"region:us"
] |
text-classification
| 2024-11-15T21:29:58Z |
---
library_name: transformers.js
base_model: almanach/camembertv2-base-xnli
---
https://huggingface.co/almanach/camembertv2-base-xnli with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
onnx-community/camembertav2-base
|
onnx-community
| 2024-11-16T02:21:16Z | 6 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"deberta-v2",
"feature-extraction",
"base_model:almanach/camembertav2-base",
"base_model:quantized:almanach/camembertav2-base",
"region:us"
] |
feature-extraction
| 2024-11-15T21:32:04Z |
---
library_name: transformers.js
base_model: almanach/camembertav2-base
---
https://huggingface.co/almanach/camembertav2-base with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
onnx-community/camembertav2-base-ftb-ner
|
onnx-community
| 2024-11-16T02:19:55Z | 8 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"deberta-v2",
"token-classification",
"base_model:almanach/camembertav2-base-ftb-ner",
"base_model:quantized:almanach/camembertav2-base-ftb-ner",
"region:us"
] |
token-classification
| 2024-11-15T21:32:35Z |
---
library_name: transformers.js
base_model: almanach/camembertav2-base-ftb-ner
---
https://huggingface.co/almanach/camembertav2-base-ftb-ner with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
skywalker290/Timesformer-vivit-d1
|
skywalker290
| 2024-11-16T02:11:41Z | 68 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"base_model:finetune:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-11-15T23:26:27Z |
---
library_name: transformers
license: mit
base_model: google/vivit-b-16x2-kinetics400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Timesformer-vivit-d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Timesformer-vivit-d1
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7607
- Accuracy: 0.7557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 12010
- mixed_precision_training: Native AMP
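For reference, a minimal sketch of how this configuration could be reconstructed with `transformers.TrainingArguments` (the `output_dir` and the exact flag set are assumptions, not taken from the original training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Timesformer-vivit-d1",   # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",                 # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=12010,
    fp16=True,                           # "Native AMP" mixed precision
)
```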
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0024 | 0.1 | 1201 | 2.5898 | 0.6116 |
| 0.7957 | 1.1 | 2402 | 1.8821 | 0.6666 |
| 0.5344 | 2.1 | 3603 | 1.7371 | 0.6686 |
| 0.2148 | 3.1 | 4804 | 1.4470 | 0.7413 |
| 0.883 | 4.1 | 6005 | 1.7974 | 0.6735 |
| 0.0012 | 5.1 | 7206 | 1.5739 | 0.7386 |
| 0.0008 | 6.1 | 8407 | 1.7734 | 0.7307 |
| 1.8254 | 7.1 | 9608 | 1.4496 | 0.7704 |
| 0.6005 | 8.1 | 10809 | 1.8740 | 0.7504 |
| 0.0002 | 9.1 | 12010 | 1.7607 | 0.7557 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/gemma-soap-best-merged-GGUF
|
mradermacher
| 2024-11-16T02:02:39Z | 12 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Farhang87/gemma-soap-best-merged",
"base_model:quantized:Farhang87/gemma-soap-best-merged",
"endpoints_compatible",
"region:us"
] | null | 2024-11-16T01:46:03Z |
---
base_model: Farhang87/gemma-soap-best-merged
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Farhang87/gemma-soap-best-merged
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-soap-best-merged-GGUF/resolve/main/gemma-soap-best-merged.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
e22vvb/mt5-base_EN_TH_sch_wiki_EN_TH_spider
|
e22vvb
| 2024-11-16T02:00:29Z | 113 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-15T14:50:14Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: mt5-base_EN_TH_sch_wiki_EN_TH_spider
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_EN_TH_sch_wiki_EN_TH_spider
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge2 Precision: 0.011
- Rouge2 Recall: 0.0037
- Rouge2 Fmeasure: 0.005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0 | 1.0 | 9693 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 2.0 | 19386 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 3.0 | 29079 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 4.0 | 38772 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 5.0 | 48465 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 6.0 | 58158 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 7.0 | 67851 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 8.0 | 77544 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 9.0 | 87237 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 10.0 | 96930 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 11.0 | 106623 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 12.0 | 116316 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 13.0 | 126009 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 14.0 | 135702 | nan | 0.011 | 0.0037 | 0.005 |
| 0.0 | 15.0 | 145395 | nan | 0.011 | 0.0037 | 0.005 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.2.2
- Datasets 2.16.1
- Tokenizers 0.20.3
|
premanthcharan/Image_Captioning_Model
|
premanthcharan
| 2024-11-16T01:59:02Z | 28 | 1 | null |
[
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"image-captioning",
"Transformers",
"arxiv:1405.0312",
"arxiv:2101.10804",
"arxiv:1810.04020",
"arxiv:2010.11929",
"arxiv:1512.03385",
"arxiv:1502.03044",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2024-11-12T22:56:53Z |
---
tags:
- image-to-text
- image-captioning
- Transformers
- vision-encoder-decoder
license: apache-2.0
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# Image Captioning Using a Transformer Model

# Table of Contents
- [1. Introduction](#1-introduction)
- [2. Dataset Used](#2-dataset-used)
- [3. Installation](#3-installation)
- [4. Models and Technologies Used](#4-models-and-technologies-used)
- [5. Steps for Code Explanation](#5-steps-for-code-explanation)
- [6. Results and Analysis](#6-results-and-analysis)
- [7. Evaluation Metrics](#7-evaluation-metrics)
- [8. References](#8-references)
## 1. Introduction
Image captioning is a challenging problem that involves generating human-like descriptions for images. By utilizing Vision Transformers, this project aims to achieve improved image understanding and caption generation. Transformers have shown promising results in various natural language processing tasks, and this project explores their application, combined with computer vision, to image captioning.
## 2. Dataset Used
### About MS COCO dataset
The Microsoft **C**ommon **O**bjects in **CO**ntext (MS COCO) dataset is a large-scale dataset for scene understanding. The dataset is commonly used to train and benchmark object detection, segmentation, and captioning algorithms.

You can read more about the dataset on the [website](http://cocodataset.org/#home), [research paper](https://arxiv.org/pdf/1405.0312.pdf), or Appendix section at the end of this page.
## 3. Installation
### Install COCO API
1. Clone this repo: https://github.com/cocodataset/cocoapi
```
git clone https://github.com/cocodataset/cocoapi.git
```
2. Setup the coco API (also described in the readme [here](https://github.com/cocodataset/cocoapi))
```
cd cocoapi/PythonAPI
make
cd ..
```
3. Download some specific data from here: http://cocodataset.org/#download (described below)
* Under **Annotations**, download:
* **2017 Train/Val annotations [241MB]** (extract captions_train2017.json and captions_val2017.json, and place at locations cocoapi/annotations/captions_train2017.json and cocoapi/annotations/captions_val2017.json, respectively)
* **2017 Testing Image info [1MB]** (extract image_info_test2017.json and place at location cocoapi/annotations/image_info_test2017.json)
* Under **Images**, download:
* **2017 Train images [83K/13GB]** (extract the train2017 folder and place at location cocoapi/images/train2017/)
* **2017 Val images [41K/6GB]** (extract the val2017 folder and place at location cocoapi/images/val2017/)
* **2017 Test images [41K/6GB]** (extract the test2017 folder and place at location cocoapi/images/test2017/)
### Preparing the environment
**Note**: I developed this project on macOS. It can be run on Windows and Linux with minor changes.
1. Clone the repository, and navigate to the downloaded folder.
```
git clone https://github.com/CapstoneProjectimagecaptioning/image_captioning_transformer.git
cd image_captioning_transformer
```
2. Create (and activate) a new environment, named `captioning_env` with Python 3.7. If prompted to proceed with the install `(Proceed [y]/n)` type y.
```shell
conda create -n captioning_env python=3.7
source activate captioning_env
```
At this point your command line should look something like: `(captioning_env) <User>:image_captioning <user>$`. The `(captioning_env)` indicates that your environment has been activated, and you can proceed with further package installations.
3. Before you can experiment with the code, make sure you have all the libraries and dependencies required by this project. You will mainly need Python 3.7+, PyTorch and torchvision, OpenCV, and Matplotlib. You can install the dependencies using:
```
pip install -r requirements.txt
```
4. Navigate back to the repo. (Also, your source environment should still be activated at this point.)
```shell
cd image_captioning
```
5. Open the directory of notebooks using the command below. You'll see all of the project files appear in your local environment; open the first notebook and follow the instructions.
```shell
jupyter notebook
```
6. Once you open any of the project notebooks, make sure you are in the correct `captioning_env` environment by clicking `Kernel > Change Kernel > captioning_env`.
## 4. Models and Technologies Used
### The following methods and techniques are employed in this project:
- Vision Transformers (ViTs)
- Attention mechanisms
- Language modeling
- Transfer learning
- Evaluation metrics for image captioning (e.g., BLEU, METEOR, CIDEr)
### The project is implemented in Python and utilizes the following libraries:
- PyTorch
- Transformers
- TorchVision
- NumPy
- NLTK
- Matplotlib
### Introduction
This project uses a transformer-based model [[3]](#3) to generate descriptions
for images. This task is known as image captioning. Researchers have approached
this problem with many methodologies; one of them is the encoder-decoder neural
network [[4]](#4). The encoder transforms the source image into a
representation space; the decoder then translates the information from the
encoded space into natural language. The goal of the encoder-decoder is to
minimize the loss of generating a description from an image.
As shown in the survey by MD Zakir Hossain et al. [[4]](#4), the models that
use the encoder-decoder architecture mainly consist of a language model based
on LSTM [[5]](#5), which decodes the encoded image received from a CNN; see
Figure 1. The limitations of LSTMs with long sequences, together with the
success of transformers in machine translation and other NLP tasks, drew
attention to their use in machine vision. Alexey Dosovitskiy et al. introduced
an image classification model (ViT) based on a classical transformer encoder,
showing good performance [[6]](#6). Based on ViT, Wei Liu et al. presented an
image captioning model (CPTR) using an encoder-decoder transformer [[1]](#1).
The source image is fed to the transformer encoder as a sequence of patches;
hence, one can treat the image captioning problem as a machine translation task.

Figure 1: Encoder Decoder Architecture
### Framework
The CPTR [[1]](#1) consists of an image patcher that converts images
$x \in \mathbb{R}^{H \times W \times C}$ into a sequence of patches
$x_p \in \mathbb{R}^{N \times (P^2 \cdot E)}$, where $N$ is the number of
patches; $H$, $W$, and $C$ are the image height, width, and number of channels
($C = 3$); $P$ is the patch resolution; and $E$ is the image embedding size.
Position embeddings are then added to the image patches, which form the input
to twelve layers of identical transformer encoders. The output of the last
encoder layer goes to four layers of identical transformer decoders. The
decoder also takes words with sinusoid positional embedding.
The pre-trained ViT weights initialize the CPTR encoder [[1]](#1). I omitted
the initialization and the image positional embeddings, adding an image
embedding module to the image patcher that uses the feature map extracted from
the Resnet101 network [[7]](#7). The number of encoder layers is reduced to
two. For Resnet101, I deleted the last two layers and the final softmax layer
used for image classification.
Another modification takes place on the encoder side. The feedforward network
consists of two convolution layers with a ReLU activation function in between.
The encoder side deals solely with the image part, where it is beneficial to
exploit the relative positions of the features we have. Refer to Figure 2 for
the model architecture.

Figure 2: Model Architecture
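A minimal sketch of that feed-forward block (the embedding sizes and kernel size are illustrative, not values from `code/config.json`; it assumes the number of patches is a perfect square):

```python
import torch
import torch.nn as nn

class ConvFeedForward(nn.Module):
    """Two convolutions with a ReLU in between, applied over the patch grid."""
    def __init__(self, embed_dim: int = 512, hidden_dim: int = 2048):
        super().__init__()
        self.conv1 = nn.Conv2d(embed_dim, hidden_dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(hidden_dim, embed_dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, e = x.shape                              # (batch, patches, embed)
        s = int(n ** 0.5)                              # side of the patch grid
        x = x.transpose(1, 2).reshape(b, e, s, s)      # to (B, E, H', W')
        x = self.conv2(torch.relu(self.conv1(x)))      # conv -> ReLU -> conv
        return x.reshape(b, e, n).transpose(1, 2)      # back to (B, N, E)
```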
### Training
The transformer decoder output goes to one fully connected layer, which
provides, given the previous tokens, a probability distribution
$p \in \mathbb{R}^k$ (where $k$ is the vocabulary size) for each token in the
sequence.

I trained the model using cross-entropy loss given the target ground truth
$y_{1:T}$, where $T$ is the length of the sequence. Also, I add the doubly
stochastic attention
regularization [[8]](#8) to the cross-entropy loss to penalize high weights in
the encoder-decoder attention. This term encourages the summation of attention
weights across the sequence to be approximately equal to one. By doing so,
the model will not concentrate on specific parts in the image when generating a
caption. Instead, it will look all over the image, leading to a richer and more
descriptive text [[8]](#8).
The loss function is defined as:

$$\mathcal{L} = -\sum_{c=1}^{T}\log p\left(y_c \mid y_{1:c-1}\right) + \sum_{l=1}^{L}\frac{1}{L}\left(\sum_{d=1}^{D}\sum_{i=1}^{P^2}\left(1-\sum_{c=1}^{T}\alpha_{cidl}\right)^2\right)$$

where $D$ is the number of heads and $L$ is the number of layers.
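A sketch of how this objective could be computed (tensor shapes are assumptions; this is illustrative, not the repository's implementation):

```python
import torch
import torch.nn.functional as F

def captioning_loss(logits, targets, attn):
    # logits: (B, T, k), targets: (B, T)
    # attn: (L, D, B, T, P^2) encoder-decoder attention weights
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    col_sums = attn.sum(dim=3)                       # sum over caption tokens c
    reg = ((1.0 - col_sums) ** 2).sum(dim=(1, 3))    # sum over heads d, patches i
    reg = reg.mean()                                 # 1/L over layers, batch mean
    return ce + reg
```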
I used the Adam optimizer with a batch size of thirty-two. The reader can find the
model sizes in the configuration file `code/config.json`. Evaluation metrics
used are Bleu [[9]](#9), METEOR [[10]](#10), and Gleu [[11]](#11).
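For illustration, these metrics can be computed with NLTK roughly as follows (the tokens are hypothetical; `meteor_score` may additionally require `nltk.download('wordnet')`):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.gleu_score import sentence_gleu
from nltk.translate.meteor_score import meteor_score

references = [["a", "red", "bus", "driving", "down", "a", "street"]]
candidate = ["a", "red", "double", "decker", "bus", "on", "a", "street"]

bleu4 = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
gleu = sentence_gleu(references, candidate)
meteor = meteor_score(references, candidate)
```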
I trained the model for one hundred epochs, with stopping criteria if the
tracked evaluation metric (bleu-4) does not improve for twenty successive
epochs. Also, the learning rate is reduced by 0.25% if the tracked evaluation
metric (bleu-4) does not improve for ten consecutive epochs. The evaluation of
the model against the validation split takes place every two epochs.
The pre-trained Glove embeddings [[12]](#12) initialize the word embedding
weights. The words embeddings are frozen for ten epochs. The Resnet101 network
is tuned from the beginning.
### Inference
A beam search of size five is used to generate captions for the images in the
test split. Generation starts by feeding in the image and the "start of
sentence" special token. Then, at each iteration, the five tokens with the
highest scores are kept. Generation stops when the "end of sentence" token is
generated or the maximum length limit is reached.
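A simplified sketch of this decoding procedure (the model interface is an assumption):

```python
import torch

@torch.no_grad()
def beam_search(model, image, start_id, end_id, beam_size=5, max_len=21):
    beams = [([start_id], 0.0)]                    # (token ids, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_id:                  # finished beams stay as-is
                candidates.append((seq, score))
                continue
            logits = model(image, torch.tensor([seq]))    # (1, len, vocab)
            log_probs = logits[0, -1].log_softmax(-1)
            top_lp, top_ids = log_probs.topk(beam_size)
            for lp, tok in zip(top_lp.tolist(), top_ids.tolist()):
                candidates.append((seq + [tok], score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(seq[-1] == end_id for seq, _ in beams):
            break
    return beams[0][0]
```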
## 5. Steps for Code Explanation
### 1. Data Loading and Preprocessing
- Load Annotations: The code first loads image-caption pairs from the COCO 2017 dataset. It uses JSON files containing images and corresponding captions (captions_train2017.json).
- Pairing Images and Captions: The code then creates a list (img_cap_pairs) that pairs image filenames with their respective captions.
- Dataframe for Captions: It organizes the data in a pandas DataFrame for easier manipulation, including creating a path to each image file.
- Sampling Data: 70,000 image-caption pairs are randomly sampled, making the dataset manageable without needing all data.
### 2. Text Preprocessing
- The code preprocesses captions to prepare them for the model. It lowercases the text, removes punctuation, replaces multiple spaces with single spaces, and adds [start] and [end] tokens, marking the beginning and end of each caption.
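A minimal sketch of this preprocessing step (the function name is an assumption):

```python
# Illustrative caption preprocessing: lowercase, strip punctuation,
# collapse whitespace, and add the [start]/[end] markers.
import re

def preprocess_caption(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)        # remove punctuation
    text = re.sub(r"\s+", " ", text).strip()   # collapse multiple spaces
    return f"[start] {text} [end]"
```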
### 3. Tokenization
- Vocabulary Setup: A tokenizer (TextVectorization) is created with a vocabulary size of 15,000 words and a maximum token length of 40. It tokenizes captions, transforming them into sequences of integers.
- Saving Vocabulary: The vocabulary is saved to a file so that it can be reused later without retraining.
- Mapping Words to Indexes: word2idx and idx2word are mappings that convert words to indices and vice versa.
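A minimal sketch of this tokenizer setup (`captions_ds`, a dataset of caption strings, is an assumed name):

```python
# TextVectorization with a 15,000-word vocabulary and 40-token sequences.
import tensorflow as tf

tokenizer = tf.keras.layers.TextVectorization(
    max_tokens=15000,
    output_sequence_length=40,
    standardize=None,              # captions are already preprocessed above
)
tokenizer.adapt(captions_ds)       # captions_ds: tf.data.Dataset of strings

vocab = tokenizer.get_vocabulary()
word2idx = {w: i for i, w in enumerate(vocab)}
idx2word = {i: w for i, w in enumerate(vocab)}
```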
### 4. Dataset Preparation
- Image-Caption Mapping: Using a dictionary, each image is mapped to its list of captions. Then, the images are shuffled, and a train-validation split is made (80% for training, 20% for validation).
- Creating TensorFlow Datasets: Using the load_data function, images are resized, preprocessed, and tokenized captions are created as tensors. These tensors are batched for training and validation, improving memory efficiency and allowing parallel processing.
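A sketch of what such a `load_data` step could look like (the path/caption variables, batch size, and the 299×299 InceptionV3 input size are assumptions):

```python
import tensorflow as tf

def load_data(img_path, caption):
    img = tf.io.read_file(img_path)
    img = tf.io.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (299, 299))     # InceptionV3 input size
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    return img, tokenizer(caption)

train_ds = (tf.data.Dataset.from_tensor_slices((train_paths, train_captions))
            .map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(1024)
            .batch(64)                         # assumed batch size
            .prefetch(tf.data.AUTOTUNE))
```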
### 5. Data Augmentation
- Basic image augmentations (RandomFlip, RandomRotation, and RandomContrast) are applied to training images to help the model generalize better by learning from slightly altered versions of each image.
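The listed augmentations map directly onto Keras preprocessing layers (the exact factors below are assumptions):

```python
import tensorflow as tf

image_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomContrast(0.2),
])
```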
### 6. Model Architecture
#### CNN Encoder:
- An InceptionV3 model (pre-trained on ImageNet) is used to process images and extract features, which serve as input to the transformer.
#### Transformer Encoder Layer:
- A TransformerEncoderLayer with multi-head self-attention and normalization layers learns the relationships between image features.
#### Embeddings Layer:
- This layer adds positional embeddings, allowing the model to capture the order of words in captions.
#### Transformer Decoder Layer:
- The TransformerDecoderLayer generates captions. It includes multi-head attention, feedforward neural networks, and dropout to prevent overfitting. Masking ensures that tokens don’t “see” future tokens when predicting the next word.
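The masking mentioned above is the standard causal (look-ahead) mask; a minimal sketch:

```python
import tensorflow as tf

def causal_mask(seq_len: int) -> tf.Tensor:
    # Lower-triangular matrix of ones: position i may only attend to
    # positions <= i, so future tokens are hidden during training.
    return tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0)
```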
### 7. Image Captioning Model Class
- The ImageCaptioningModel class wraps the encoder, decoder, and CNN encoder into a unified model for training and inference.
- Loss and Accuracy Calculation: Custom functions track model performance by calculating the loss and accuracy using the tokenized captions and generated predictions.
### 8. Training
- Loss Function: Sparse categorical cross-entropy is used to calculate the difference between predicted and actual tokens, excluding padding tokens.
- Early Stopping: Monitors validation loss to stop training if performance on the validation set stops improving.
- Model Compilation and Training: The model is compiled, optimized, and trained over multiple epochs with early stopping.
### 9. Evaluation and Caption Generation
- The generate_caption function generates a caption for a new image by feeding it through the model. The function iteratively predicts tokens, appending each token to the generated sequence until the [end] token appears.
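A simplified greedy version of this loop (the `cnn_model`/`encoder`/`decoder` interface follows the structure described above but is otherwise an assumption):

```python
import tensorflow as tf

def generate_caption(model, img, tokenizer, idx2word, max_len=40):
    features = model.cnn_model(img[tf.newaxis, ...])   # extract image features
    encoded = model.encoder(features, training=False)
    caption = "[start]"
    for i in range(max_len - 1):
        tokens = tokenizer([caption])[:, :-1]          # tokenize partial caption
        preds = model.decoder(tokens, encoded, training=False)
        next_id = int(tf.argmax(preds[0, i]))          # prediction at position i
        word = idx2word[next_id]
        if word == "[end]":
            break
        caption += " " + word
    return caption.replace("[start] ", "", 1)
```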
### 10. Saving the Model
- The model weights are saved to a file (Image_Captioning_Model) to reload the model for future use without retraining.
## 6. Results and Analysis
### Deployment to Hugging Face Spaces: sharing the image captioning service using Gradio
The Hugging Face Space Image Captioning GenAI serves as a user-friendly deployment of an image captioning model, designed to generate descriptive captions for uploaded images. The deployment leverages the Hugging Face Spaces infrastructure, which is ideal for hosting machine learning applications with interactive interfaces.
### Key Features of the Deployment:
- *Web-Based Interaction*: The Space offers an intuitive graphical interface for users to upload images and receive real-time AI-generated captions.
- *Scalability*: Built on Hugging Face’s robust hosting environment, the application ensures smooth operation, accommodating multiple users simultaneously.
- *Efficient Framework*: Likely powered by Gradio, the interface integrates seamlessly with the underlying Transformer-based model, enabling fast inference and visually engaging outputs.
- *Accessibility*: Users do not need any technical knowledge or setup to use the tool—everything is available in-browser.
[Gradio](https://www.gradio.app/) is a package that allows users to create simple web apps with just a few lines of code. It is essentially used for the same purpose as Streamlit and Flask but is much simpler to use. Many types of web interface tools can be selected, including sketchpad, text boxes, file upload buttons, webcam, etc. Using these tools to receive various types of data as input, machine learning tasks such as classification and regression can easily be demoed.
You can deploy an interactive version of the image captioning service in your browser by running the following command. Please don't forget to set the `cocoapi_dir` and encoder/decoder model paths to the correct values.
```shell
python gradio_main.py
```
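For reference, the core of such a script might look like the following sketch (hypothetical; it assumes the `generate_caption`, `model`, `tokenizer`, and `idx2word` objects described above):

```python
import gradio as gr

def caption_image(image):
    # Delegate to the captioning model; `image` arrives as a NumPy array.
    return generate_caption(model, image, tokenizer, idx2word)

demo = gr.Interface(
    fn=caption_image,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Textbox(label="Generated caption"),
    title="Image Captioning GenAI",
)
demo.launch()
```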
Access the service URL: https://huggingface.co/spaces/premanthcharan/Image_Captioining_GenAI

- A web interface developed using the Gradio platform and deployed to Hugging Face Spaces for user interaction

- Caption Generated: a red double decker bus driving down a street
### Model Training
Figure 3 and Figure 4 show the loss and bleu-4 scores during the training and
validation phases. These figures show that the model starts to overfit early
around epoch eight. The bleu-4 score and loss value did not improve after
epoch 20. The overfitting may be due to the following reasons:
1. Not enough training data:
- The CPTR's encoder is initialized by the pre-trained ViT model [[1]](#1). In
the ViT paper, the model performs relatively well when trained on a
large dataset like ImageNet, which has 21 million images [[6]](#6). In our
case, the model weights are randomly initialized, and we have fewer than
18.5K images.
- Typically the dataset split configuration is 113,287, 5,000, and 5,000
images for training, validation, and test based on Karpathy et al.'s work
[[13]](#13). My split has far fewer images in the training dataset and is
based on the 80%, 20%, 20% configuration.
2. The image features learned from Resnet101 are split into N patches of
   size _P x P_. Such a configuration may not be the best design, as these
   features do not have to represent an image that could be transformed into a
   sequence of subgrids. Flattening the Resnet101 features may be a better
   design.
3. The pre-trained Resnet101 has been tuned from the beginning, unlike the
   word embedding layer. Gradient updates during the early training stages,
   when the model has not yet learned, may distort the image features of the
   Resnet101.
4. Unsuitable hyperparameters

### Inference Output
#### Generated Text Length
Figure 5 shows the distribution of the generated captions' lengths. The figure
indicates that the model tends to generate shorter captions. The distribution
of the training captions' lengths (left) explains that behavior: the
distribution of lengths is positively skewed. More specifically, the maximum
caption length generated by the model (21 tokens) accounts for 98.66% of the
lengths in the training set. See "code/experiment.ipynb", Section 1.3.

Figure 5: Generated caption's lengths distribution
## 7. Evaluation Metrics
The table below shows the mean and standard deviation of the performance
metrics across the test dataset. The bleu4 has the highest variation,
suggesting that the performance varies across the dataset. This high variation
is expected as the model training needs improvement, as discussed above. Also,
the distribution of the bleu4 scores over the test set shows that 83.3% of the
scores are less than 0.5. See “code/experiment.ipynb Section 1.4”.
| | bleu1 | bleu2 | bleu3 | bleu4 | gleu | meteor |
| :--- | :----: |:----: |:----: |:----: |:----: |:----: |
|mean ± std | 0.7180 ± 0.17 | 0.5116 ± 0.226 | 0.3791 ± 0.227 | 0.2918 ± 0.215 | 0.2814 ± 0.174 | 0.4975 ± 0.193 |
### Attention Visualisation
I will examine the last layer of the transformer encoder-decoder attention. The weights are averaged across its heads. Section 1.5 in the notebook "code/experiment.ipynb" shows that the weights contain outliers. I considered weights at or above the 99.95th percentile to be outliers; outlier values are capped at the 99.95th percentile.
Fourteen samples were randomly selected from the test split to be examined. The sample image is superimposed with the attention weights for each generated token. The output is saved in either GIF format (one image for all generated tokens) or PNG format (one image per token). All superimposed images are saved under "images/tests". The reader can examine the selected fourteen superimposed images under Section 2.0 of the experiments notebook (you will need to rerun all cells under Section 2.0). The samples are categorized as follows:
- Category 1: two samples with the highest bleu4 (= 1.0)
- Category 2: four samples with the lowest bleu4 scores
- Category 3: two samples with low bleu4 scores (up to 0.5]
- Category 4: two samples with bleu4 scores in (0.5 - 0.7]
- Category 5: two samples with bleu4 scores in (0.7 - 0.8]
- Category 6: two samples with bleu4 scores in (0.8 - 1.0)
## 8. References
<a id="1">[1]</a> Liu, W., Chen, S., Guo, L., Zhu, X., & Liu, J. (2021). CPTR:
Full transformer network for image captioning. arXiv preprint
[arXiv:2101.10804](https://arxiv.org/abs/2101.10804).
<a id="2">[2]</a> Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P.,
Ramanan, D., ... & Zitnick, C. L. (2014, September). Microsoft coco: Common
objects in context. In European conference on computer vision (pp. 740-755).
Springer, Cham.
<a id="3">[3]</a> A. Vaswani et al., 'Attention is all you need', Advances in neural
information processing systems, vol. 30, 2017.
<a id="4">[4]</a> M. Z. Hossain, F. Sohel, M. F. Shiratuddin, and H. Laga, 'A Comprehensive
Survey of Deep Learning for Image Captioning', arXiv:1810.04020 [cs, stat],
Oct. 2018, Accessed: Mar. 03, 2022. [Online]. Available:
http://arxiv.org/abs/1810.04020.
<a id="5">[5]</a> S. Hochreiter and J. Schmidhuber, ‘Long short-term memory’, Neural
computation, vol. 9, no. 8, pp. 1735–1780, 1997.
<a id="6">[6]</a> A. Dosovitskiy et al., 'An image is worth 16x16 words: Transformers for
image recognition at scale', arXiv preprint arXiv:2010.11929, 2020.
<a id="7">[7]</a> K. He, X. Zhang, S. Ren, and J. Sun, 'Deep Residual Learning for Image
Recognition', arXiv:1512.03385 [cs], Oct. 2015, Accessed: Mar. 06, 2022.
[Online]. Available: http://arxiv.org/abs/1512.03385.
<a id="8">[8]</a> K. Xu et al., 'Show, Attend and Tell: Neural Image Caption Generation with
Visual Attention', arXiv:1502.03044 [cs], Apr. 2016, Accessed: Mar. 07, 2022.
[Online]. Available: http://arxiv.org/abs/1502.03044.
<a id="9">[9]</a> K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, 'Bleu: a method for
automatic evaluation of machine translation', in Proceedings of the 40th annual
meeting of the Association for Computational Linguistics, 2002, pp. 311–318.
<a id="10">[10]</a> S. Banerjee and A. Lavie, 'METEOR: An automatic metric for MT evaluation
with improved correlation with human judgments', in Proceedings of the acl
workshop on intrinsic and extrinsic evaluation measures for machine translation
and/or summarization, 2005, pp. 65–72.
<a id="11">[11]</a> A. Mutton, M. Dras, S. Wan, and R. Dale, 'GLEU: Automatic evaluation of
sentence-level fluency', in Proceedings of the 45th Annual Meeting of the
Association of Computational Linguistics, 2007, pp. 344–351.
<a id="12">[12]</a> J. Pennington, R. Socher, and C. D. Manning, 'Glove: Global vectors for
word representation', in Proceedings of the 2014 conference on empirical
methods in natural language processing (EMNLP), 2014, pp. 1532–1543.
<a id="13">[13]</a> A. Karpathy and L. Fei-Fei, 'Deep visual-semantic alignments for
generating image descriptions', in Proceedings of the IEEE conference on
computer vision and pattern recognition, 2015, pp. 3128–3137.
<a id="14">[14]</a> O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, 'Show and tell: A neural image caption generator', in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3156–3164.
<a id="15">[15]</a> Hugging Face documentation on image captioning.
https://huggingface.co/docs/transformers/main/en/tasks/image_captioning
<a id="16">[16]</a> Quickstart guide to GitHub Pages.
https://docs.github.com/en/pages/quickstart
<a id="17">[17]</a> Microsoft COCO: Common Objects in Context. arXiv:1405.0312 [cs.CV].
https://doi.org/10.48550/arXiv.1405.0312
<a id="18">[18]</a> Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. arXiv:1502.03044v3 [cs.LG], 19 Apr 2016. https://doi.org/10.48550/arXiv.1502.03044
<a id="19">[19]</a> Deep Residual Learning for Image Recognition. arXiv:1512.03385v1 [cs.CV], 10 Dec 2015.
<a id="20">[20]</a> Gradio quickstart guide. https://www.gradio.app/guides/quickstart
|
mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF
|
mradermacher
| 2024-11-16T01:51:45Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:NESPED-GEN/TinyLlama-text2SQL-schemaReduzido",
"base_model:quantized:NESPED-GEN/TinyLlama-text2SQL-schemaReduzido",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-16T01:48:28Z |
---
base_model: NESPED-GEN/TinyLlama-text2SQL-schemaReduzido
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NESPED-GEN/TinyLlama-text2SQL-schemaReduzido
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-text2SQL-schemaReduzido-GGUF/resolve/main/TinyLlama-text2SQL-schemaReduzido.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
vijay-ravichander/Llama-1B-Summarization-LoRA-MLP-r128-merged
|
vijay-ravichander
| 2024-11-16T01:49:04Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T01:47:48Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cloneQ/my_personal_assistant
|
cloneQ
| 2024-11-16T01:48:12Z | 5 | 0 | null |
[
"pytorch",
"internlm2",
"custom_code",
"zh",
"base_model:internlm/internlm2_5-7b-chat",
"base_model:finetune:internlm/internlm2_5-7b-chat",
"license:apache-2.0",
"region:us"
] | null | 2024-11-15T13:25:34Z |
---
license: apache-2.0
language:
- zh
base_model:
- internlm/internlm2_5-7b-chat
---
|
vijay-ravichander/Llama-1B-Summarization-LoRA-Attn-r128-merged
|
vijay-ravichander
| 2024-11-16T01:40:54Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T01:39:43Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
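Since this section is empty, here is a minimal sketch of the usual `transformers` pipeline call; the model ID is this repository's, but the prompt format and generation settings are illustrative assumptions, not documented behavior.

```python
# Minimal sketch: load this merged summarization finetune with the
# standard text-generation pipeline (prompt and settings are illustrative).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="vijay-ravichander/Llama-1B-Summarization-LoRA-Attn-r128-merged",
)

article = "Long article text to summarize..."
result = generator(f"Summarize:\n{article}\n\nSummary:", max_new_tokens=128)
print(result[0]["generated_text"])
```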
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhong-al/x3d
|
zhong-al
| 2024-11-16T01:35:51Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T02:16:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
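The section is empty here as well; as a rough sketch under the assumption that the checkpoint is compatible with the generic `Auto` classes (the card does not document a loading path), loading might look like:

```python
# Rough sketch, assuming the checkpoint registers with the generic Auto
# classes; the card does not document the intended loading path.
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("zhong-al/x3d")
model = AutoModel.from_pretrained("zhong-al/x3d")
print(model.__class__.__name__, config.model_type)
```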
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/internlm2_5-7b-chat-GGUF
|
mradermacher
| 2024-11-16T01:14:02Z | 51 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:internlm/internlm2_5-7b-chat",
"base_model:quantized:internlm/internlm2_5-7b-chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T22:45:53Z |
---
base_model: internlm/internlm2_5-7b-chat
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/internlm/internlm2_5-7b-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/internlm2_5-7b-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
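As a concrete starting point, here is a minimal sketch that downloads one of the single-file quants below and runs it with `llama-cpp-python`; the chosen quant, context size, and prompt are illustrative assumptions.

```python
# Minimal sketch: fetch a single-file quant from this repo and run it
# with llama-cpp-python (quant choice, n_ctx, and prompt are illustrative).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/internlm2_5-7b-chat-GGUF",
    filename="internlm2_5-7b-chat.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Question: What is InternLM?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```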
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/internlm2_5-7b-chat-GGUF/resolve/main/internlm2_5-7b-chat.f16.gguf) | f16 | 15.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hc3515/fine-tuned-llama2
|
hc3515
| 2024-11-16T01:04:44Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-16T00:57:51Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
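The repository tags mention 4-bit bitsandbytes weights, so a plausible loading sketch follows; the quantization settings and prompt are assumptions, not documented by this card.

```python
# Plausible sketch: load the checkpoint with 4-bit bitsandbytes
# quantization, as the repo tags suggest (settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("hc3515/fine-tuned-llama2")
model = AutoModelForCausalLM.from_pretrained(
    "hc3515/fine-tuned-llama2",
    quantization_config=bnb,
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```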
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF
|
mradermacher
| 2024-11-16T00:55:02Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"base_model:quantized:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-16T00:38:47Z |
---
base_model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/kellemar-DPO-Orca-Distilled-7B-SLERP-GGUF/resolve/main/kellemar-DPO-Orca-Distilled-7B-SLERP.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nicolofelicioni/pythia-1b-sft-hh-normal-4
|
nicolofelicioni
| 2024-11-16T00:48:04Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-26T09:58:07Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
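As a starting point, here is a minimal sketch of the standard causal-LM generate loop for this Pythia-based checkpoint; the Human/Assistant prompt style and sampling values are illustrative assumptions based on the HH-style tags, not documented by this card.

```python
# Minimal sketch of the standard causal-LM generate loop for this
# Pythia-based checkpoint (prompt format and sampling are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicolofelicioni/pythia-1b-sft-hh-normal-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Human: How do I brew tea?\n\nAssistant:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```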
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mav23/magnum-v4-9b-GGUF
|
mav23
| 2024-11-16T00:24:15Z | 55 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"license:gemma",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-15T23:02:39Z |
---
language:
- en
license: gemma
library_name: transformers
tags:
- chat
pipeline_tag: text-generation
model-index:
- name: magnum-v4-9b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 35.03
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-9b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 33.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-9b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.63
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-9b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.98
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-9b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.65
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-9b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 32.81
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-9b
name: Open LLM Leaderboard
---

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [gemma 2 9b (chatML'ified)](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml).
## Prompting
A typical input would look like this:
```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
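In practice this format can be produced programmatically; a minimal sketch follows, assuming the ChatML chat template is bundled with the tokenizer of the base repository.

```python
# Minimal sketch: build the ChatML prompt above via the tokenizer's
# chat template (assumes the template ships with the repo's tokenizer).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v4-9b")
messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should end with "<|im_start|>assistant\n"
```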
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```yaml
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": true,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Magnum ChatML"
}
```
</details><br>
<details><summary>instruct template</summary>
```yaml
{
"system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as "!" and "~" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"last_output_sequence": "",
"system_sequence": "<|im_start|>system\n",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": true,
"names_force_groups": true,
"activation_regex": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"system_same_as_user": false,
"last_system_sequence": "",
"name": "Magnum ChatML"
}
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: /workspace/data/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: false
liger_rms_norm: false
liger_swiglu: true
liger_cross_entropy: true
liger_fused_linear_cross_entropy: false
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_16k_llama_v1.1
type: sharegpt
conversation: chatml
- path: NewEden/Claude-Instruct-5K
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal
type: sharegpt
conversation: chatml
- path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
type: sharegpt
conversation: chatml
- path: anthracite-org/nopm_claude_writing_fixed
type: sharegpt
conversation: chatml
- path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo_opus_misc_240827
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo_misc_part2
type: sharegpt
conversation: chatml
chat_template: chatml
shuffle_merged_datasets: false
default_system_message: "You are a helpful assistant that responds to the user."
dataset_prepared_path: /workspace/data/9b-fft-data
val_set_size: 0.0
output_dir: /workspace/data/9b-fft-out
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 9b-Nemo-config-fft
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
pad_token: <pad>
```
</details><br>
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B release and has given thousands of people access to our models, helping us grow.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_16k_llama_v1.1](https://huggingface.co/datasets/anthracite-org/c2_logs_16k_llama_v1.1)
- [NewEden/Claude-Instruct-5K](https://huggingface.co/datasets/NewEden/Claude-Instruct-5K)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
## Training
The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_anthracite-org__magnum-v4-9b)
| Metric |Value|
|-------------------|----:|
|Avg. |23.56|
|IFEval (0-Shot) |35.03|
|BBH (3-Shot) |33.27|
|MATH Lvl 5 (4-Shot)|11.63|
|GPQA (0-shot) |12.98|
|MuSR (0-shot) |15.65|
|MMLU-PRO (5-shot) |32.81|
|
mradermacher/CodeGemma-2b-GGUF
|
mradermacher
| 2024-11-16T00:24:03Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"gemma",
"en",
"base_model:TechxGenus/CodeGemma-2b",
"base_model:quantized:TechxGenus/CodeGemma-2b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-11-08T10:44:35Z |
---
base_model: TechxGenus/CodeGemma-2b
language:
- en
library_name: transformers
license: other
license_link: https://ai.google.dev/gemma/terms
license_name: gemma-terms-of-use
quantized_by: mradermacher
tags:
- code
- gemma
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/CodeGemma-2b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CodeGemma-2b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q2_K.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q2_K.gguf) | Q2_K | 2.4 | |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q3_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q3_K_S.gguf) | Q3_K_S | 2.7 | |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q3_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q3_K_M.gguf) | Q3_K_M | 2.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q3_K_L.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q3_K_L.gguf) | Q3_K_L | 3.0 | |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.IQ4_XS.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.IQ4_XS.gguf) | IQ4_XS | 3.1 | |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q4_0_4_4.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q4_0_4_4.gguf) | Q4_0_4_4 | 3.2 | fast on arm, low quality |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q4_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q4_K_S.gguf) | Q4_K_S | 3.2 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q4_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q4_K_M.gguf) | Q4_K_M | 3.4 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q5_K_S.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q5_K_S.gguf) | Q5_K_S | 3.7 | |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q5_K_M.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q5_K_M.gguf) | Q5_K_M | 3.8 | |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q6_K.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q6_K.gguf) | Q6_K | 4.2 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.Q8_0.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.Q8_0.gguf) | Q8_0 | 5.4 | fast, best quality |
| [PART 1](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/CodeGemma-2b.f16.gguf) [PART 2](https://huggingface.co/mradermacher/CodeGemma-2b-GGUF/resolve/main/codegemma-2b.f16.gguf) | f16 | 10.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/emma-500-llama2-7b-GGUF
|
mradermacher
| 2024-11-16T00:15:54Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:MaLA-LM/mala-monolingual-split",
"base_model:MaLA-LM/emma-500-llama2-7b",
"base_model:quantized:MaLA-LM/emma-500-llama2-7b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-13T01:46:18Z |
---
base_model: MaLA-LM/emma-500-llama2-7b
datasets:
- MaLA-LM/mala-monolingual-split
language:
- en
library_name: transformers
license: llama2
no_imatrix: nan detected in blk.31.attn_q.weight
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaLA-LM/emma-500-llama2-7b
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q4_0_4_4.gguf) | Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/emma-500-llama2-7b-GGUF/resolve/main/emma-500-llama2-7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
J-LAB/FluxiIA-Small_Brisa
|
J-LAB
| 2024-11-16T00:13:07Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:JJhooww/Mistral-7B-v0.2-Instruction",
"base_model:finetune:JJhooww/Mistral-7B-v0.2-Instruction",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-16T00:08:31Z |
---
base_model: JJhooww/Mistral-7B-v0.2-Instruction
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** J-LAB
- **License:** apache-2.0
- **Finetuned from model:** JJhooww/Mistral-7B-v0.2-Instruction
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
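Since the card notes the model was trained with Unsloth, a minimal loading sketch with Unsloth's API follows; the sequence length and 4-bit flag are illustrative assumptions, not settings documented by this card.

```python
# Minimal sketch: load this finetune with Unsloth's FastLanguageModel
# (max_seq_length and load_in_4bit are illustrative assumptions).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="J-LAB/FluxiIA-Small_Brisa",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```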
|
mradermacher/Excalibur-7b-GGUF
|
mradermacher
| 2024-11-16T00:08:18Z | 12 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:InferenceIllusionist/Excalibur-7b",
"base_model:quantized:InferenceIllusionist/Excalibur-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T22:25:46Z |
---
base_model: InferenceIllusionist/Excalibur-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/InferenceIllusionist/Excalibur-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Excalibur-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Excalibur-7b-GGUF/resolve/main/Excalibur-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PaDaS-Lab/arctic-m-bge-small
|
PaDaS-Lab
| 2024-11-15T23:18:08Z | 326 | 3 | null |
[
"safetensors",
"arctic-m-bge-small",
"mteb",
"custom_code",
"arxiv:2407.08275",
"license:mit",
"model-index",
"region:us"
] | null | 2024-11-07T09:01:21Z |
---
model-index:
- name: no_model_name_available
results:
- dataset:
config: default
name: MTEB ArguAna (default)
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: main_score
value: 62.44
- type: map_at_1
value: 37.909
- type: map_at_10
value: 54.071000000000005
- type: map_at_100
value: 54.706999999999994
- type: map_at_1000
value: 54.71
- type: map_at_20
value: 54.61
- type: map_at_3
value: 49.787
- type: map_at_5
value: 52.471999999999994
- type: mrr_at_1
value: 38.54907539118065
- type: mrr_at_10
value: 54.30778522883794
- type: mrr_at_100
value: 54.95058676123675
- type: mrr_at_1000
value: 54.9534745787606
- type: mrr_at_20
value: 54.85371234607436
- type: mrr_at_3
value: 50.023707918444806
- type: mrr_at_5
value: 52.71574205784745
- type: nauc_map_at_1000_diff1
value: 9.700052151236969
- type: nauc_map_at_1000_max
value: -11.480601048675311
- type: nauc_map_at_1000_std
value: -16.80933897048166
- type: nauc_map_at_100_diff1
value: 9.702439916132208
- type: nauc_map_at_100_max
value: -11.477121863613672
- type: nauc_map_at_100_std
value: -16.805809477344237
- type: nauc_map_at_10_diff1
value: 9.55964875147944
- type: nauc_map_at_10_max
value: -11.221604673423611
- type: nauc_map_at_10_std
value: -16.84817138477702
- type: nauc_map_at_1_diff1
value: 13.414379505055546
- type: nauc_map_at_1_max
value: -13.64398031891019
- type: nauc_map_at_1_std
value: -17.823564900618976
- type: nauc_map_at_20_diff1
value: 9.656264829584742
- type: nauc_map_at_20_max
value: -11.402956696331874
- type: nauc_map_at_20_std
value: -16.729584639384093
- type: nauc_map_at_3_diff1
value: 9.074651468472236
- type: nauc_map_at_3_max
value: -11.938799932445345
- type: nauc_map_at_3_std
value: -17.292542932113854
- type: nauc_map_at_5_diff1
value: 9.375988599355505
- type: nauc_map_at_5_max
value: -11.472571205679664
- type: nauc_map_at_5_std
value: -17.40403356468899
- type: nauc_mrr_at_1000_diff1
value: 7.411799940331186
- type: nauc_mrr_at_1000_max
value: -12.508159837494434
- type: nauc_mrr_at_1000_std
value: -16.707342470667285
- type: nauc_mrr_at_100_diff1
value: 7.414405067064217
- type: nauc_mrr_at_100_max
value: -12.50459019538836
- type: nauc_mrr_at_100_std
value: -16.703833468680948
- type: nauc_mrr_at_10_diff1
value: 7.286842407775826
- type: nauc_mrr_at_10_max
value: -12.258550496378401
- type: nauc_mrr_at_10_std
value: -16.731699740418414
- type: nauc_mrr_at_1_diff1
value: 11.596538956075104
- type: nauc_mrr_at_1_max
value: -13.73394271953812
- type: nauc_mrr_at_1_std
value: -17.64007975098422
- type: nauc_mrr_at_20_diff1
value: 7.376312921473681
- type: nauc_mrr_at_20_max
value: -12.426813484836043
- type: nauc_mrr_at_20_std
value: -16.627786497409552
- type: nauc_mrr_at_3_diff1
value: 6.654949505817999
- type: nauc_mrr_at_3_max
value: -13.137022485458507
- type: nauc_mrr_at_3_std
value: -17.32424266610232
- type: nauc_mrr_at_5_diff1
value: 7.2434372901234525
- type: nauc_mrr_at_5_max
value: -12.429947223764405
- type: nauc_mrr_at_5_std
value: -17.228937753898123
- type: nauc_ndcg_at_1000_diff1
value: 9.36855285735971
- type: nauc_ndcg_at_1000_max
value: -10.953666720445836
- type: nauc_ndcg_at_1000_std
value: -16.347516200301456
- type: nauc_ndcg_at_100_diff1
value: 9.409452556684755
- type: nauc_ndcg_at_100_max
value: -10.862168660345734
- type: nauc_ndcg_at_100_std
value: -16.229401930460405
- type: nauc_ndcg_at_10_diff1
value: 8.77691653610156
- type: nauc_ndcg_at_10_max
value: -9.563379218779584
- type: nauc_ndcg_at_10_std
value: -16.274566801403125
- type: nauc_ndcg_at_1_diff1
value: 13.414379505055546
- type: nauc_ndcg_at_1_max
value: -13.64398031891019
- type: nauc_ndcg_at_1_std
value: -17.823564900618976
- type: nauc_ndcg_at_20_diff1
value: 9.131323452637305
- type: nauc_ndcg_at_20_max
value: -10.266530434189066
- type: nauc_ndcg_at_20_std
value: -15.737108435541888
- type: nauc_ndcg_at_3_diff1
value: 7.739062271477399
- type: nauc_ndcg_at_3_max
value: -11.488056154638532
- type: nauc_ndcg_at_3_std
value: -17.411333288529267
- type: nauc_ndcg_at_5_diff1
value: 8.272542020803597
- type: nauc_ndcg_at_5_max
value: -10.397456408544468
- type: nauc_ndcg_at_5_std
value: -17.59822117969101
- type: nauc_precision_at_1000_diff1
value: 13.208924542423258
- type: nauc_precision_at_1000_max
value: 13.208924542423258
- type: nauc_precision_at_1000_std
value: 66.71142287338954
- type: nauc_precision_at_100_diff1
value: 18.762786994282852
- type: nauc_precision_at_100_max
value: 20.099447719178784
- type: nauc_precision_at_100_std
value: 48.431125716899956
- type: nauc_precision_at_10_diff1
value: 4.019933323360742
- type: nauc_precision_at_10_max
value: 4.884910439588258
- type: nauc_precision_at_10_std
value: -11.127362742499441
- type: nauc_precision_at_1_diff1
value: 13.414379505055546
- type: nauc_precision_at_1_max
value: -13.64398031891019
- type: nauc_precision_at_1_std
value: -17.823564900618976
- type: nauc_precision_at_20_diff1
value: 3.6375128143838293
- type: nauc_precision_at_20_max
value: 14.126083805554671
- type: nauc_precision_at_20_std
value: 10.615757350586888
- type: nauc_precision_at_3_diff1
value: 3.3422754903034884
- type: nauc_precision_at_3_max
value: -10.034405870340006
- type: nauc_precision_at_3_std
value: -17.917533977279017
- type: nauc_precision_at_5_diff1
value: 3.7950183183380957
- type: nauc_precision_at_5_max
value: -5.449035408572837
- type: nauc_precision_at_5_std
value: -18.586669848898257
- type: nauc_recall_at_1000_diff1
value: 13.208924542421252
- type: nauc_recall_at_1000_max
value: 13.208924542421252
- type: nauc_recall_at_1000_std
value: 66.71142287338697
- type: nauc_recall_at_100_diff1
value: 18.76278699428332
- type: nauc_recall_at_100_max
value: 20.099447719179743
- type: nauc_recall_at_100_std
value: 48.431125716900205
- type: nauc_recall_at_10_diff1
value: 4.019933323360658
- type: nauc_recall_at_10_max
value: 4.884910439588057
- type: nauc_recall_at_10_std
value: -11.127362742499546
- type: nauc_recall_at_1_diff1
value: 13.414379505055546
- type: nauc_recall_at_1_max
value: -13.64398031891019
- type: nauc_recall_at_1_std
value: -17.823564900618976
- type: nauc_recall_at_20_diff1
value: 3.6375128143838387
- type: nauc_recall_at_20_max
value: 14.126083805554623
- type: nauc_recall_at_20_std
value: 10.61575735058658
- type: nauc_recall_at_3_diff1
value: 3.3422754903035554
- type: nauc_recall_at_3_max
value: -10.034405870339956
- type: nauc_recall_at_3_std
value: -17.917533977278943
- type: nauc_recall_at_5_diff1
value: 3.795018318338047
- type: nauc_recall_at_5_max
value: -5.449035408572804
- type: nauc_recall_at_5_std
value: -18.58666984889819
- type: ndcg_at_1
value: 37.909
- type: ndcg_at_10
value: 62.44
- type: ndcg_at_100
value: 64.932
- type: ndcg_at_1000
value: 64.99000000000001
- type: ndcg_at_20
value: 64.319
- type: ndcg_at_3
value: 53.778000000000006
- type: ndcg_at_5
value: 58.599000000000004
- type: precision_at_1
value: 37.909
- type: precision_at_10
value: 8.883000000000001
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.804
- type: precision_at_3
value: 21.788
- type: precision_at_5
value: 15.405
- type: recall_at_1
value: 37.909
- type: recall_at_10
value: 88.834
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 96.088
- type: recall_at_3
value: 65.363
- type: recall_at_5
value: 77.027
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval (default)
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: main_score
value: 53.176
- type: map_at_1
value: 33.650999999999996
- type: map_at_10
value: 46.471000000000004
- type: map_at_100
value: 47.985
- type: map_at_1000
value: 48.102000000000004
- type: map_at_20
value: 47.292
- type: map_at_3
value: 42.623
- type: map_at_5
value: 44.979
- type: mrr_at_1
value: 41.201716738197426
- type: mrr_at_10
value: 52.25355950677838
- type: mrr_at_100
value: 52.88338300595689
- type: mrr_at_1000
value: 52.921972185432885
- type: mrr_at_20
value: 52.572720245822445
- type: mrr_at_3
value: 49.38006676204101
- type: mrr_at_5
value: 51.368621840724806
- type: nauc_map_at_1000_diff1
value: 52.424580577365674
- type: nauc_map_at_1000_max
value: 35.94853426088666
- type: nauc_map_at_1000_std
value: -3.1129808405979116
- type: nauc_map_at_100_diff1
value: 52.42314269469678
- type: nauc_map_at_100_max
value: 35.95564099324896
- type: nauc_map_at_100_std
value: -3.101625069102785
- type: nauc_map_at_10_diff1
value: 52.674357307094496
- type: nauc_map_at_10_max
value: 35.62082218057774
- type: nauc_map_at_10_std
value: -3.7915962794353173
- type: nauc_map_at_1_diff1
value: 58.88454782432587
- type: nauc_map_at_1_max
value: 31.58887282969742
- type: nauc_map_at_1_std
value: -3.3197840386400843
- type: nauc_map_at_20_diff1
value: 52.57811291835384
- type: nauc_map_at_20_max
value: 35.98370464846043
- type: nauc_map_at_20_std
value: -3.282933904055322
- type: nauc_map_at_3_diff1
value: 53.23139053968499
- type: nauc_map_at_3_max
value: 35.27374020498982
- type: nauc_map_at_3_std
value: -4.586249483195213
- type: nauc_map_at_5_diff1
value: 52.59485178437643
- type: nauc_map_at_5_max
value: 35.514513542685876
- type: nauc_map_at_5_std
value: -4.434526651693118
- type: nauc_mrr_at_1000_diff1
value: 49.59556586828132
- type: nauc_mrr_at_1000_max
value: 36.84616750157751
- type: nauc_mrr_at_1000_std
value: -3.8525984466340764
- type: nauc_mrr_at_100_diff1
value: 49.57531335928693
- type: nauc_mrr_at_100_max
value: 36.82683956190645
- type: nauc_mrr_at_100_std
value: -3.872554481570826
- type: nauc_mrr_at_10_diff1
value: 49.62497265122659
- type: nauc_mrr_at_10_max
value: 36.98985018458424
- type: nauc_mrr_at_10_std
value: -3.8376513272257733
- type: nauc_mrr_at_1_diff1
value: 54.49327345294693
- type: nauc_mrr_at_1_max
value: 34.8934028739382
- type: nauc_mrr_at_1_std
value: -4.437791198183867
- type: nauc_mrr_at_20_diff1
value: 49.5890168206895
- type: nauc_mrr_at_20_max
value: 36.89726798208358
- type: nauc_mrr_at_20_std
value: -3.866993349889667
- type: nauc_mrr_at_3_diff1
value: 49.59634094819107
- type: nauc_mrr_at_3_max
value: 37.16225648718551
- type: nauc_mrr_at_3_std
value: -4.414442576808539
- type: nauc_mrr_at_5_diff1
value: 49.225081579422344
- type: nauc_mrr_at_5_max
value: 36.747751335426756
- type: nauc_mrr_at_5_std
value: -4.324178992210884
- type: nauc_ndcg_at_1000_diff1
value: 50.31882542922762
- type: nauc_ndcg_at_1000_max
value: 36.94417408184034
- type: nauc_ndcg_at_1000_std
value: -1.8041849909913372
- type: nauc_ndcg_at_100_diff1
value: 49.66655309339676
- type: nauc_ndcg_at_100_max
value: 36.70372545075
- type: nauc_ndcg_at_100_std
value: -1.6243834018453231
- type: nauc_ndcg_at_10_diff1
value: 49.940843283397214
- type: nauc_ndcg_at_10_max
value: 36.0676312207537
- type: nauc_ndcg_at_10_std
value: -3.439514885728974
- type: nauc_ndcg_at_1_diff1
value: 54.49327345294693
- type: nauc_ndcg_at_1_max
value: 34.8934028739382
- type: nauc_ndcg_at_1_std
value: -4.437791198183867
- type: nauc_ndcg_at_20_diff1
value: 49.93181052825062
- type: nauc_ndcg_at_20_max
value: 36.71459050402181
- type: nauc_ndcg_at_20_std
value: -2.6921628328410265
- type: nauc_ndcg_at_3_diff1
value: 50.26692043258316
- type: nauc_ndcg_at_3_max
value: 36.24184609760576
- type: nauc_ndcg_at_3_std
value: -4.757874636308119
- type: nauc_ndcg_at_5_diff1
value: 49.37130579587368
- type: nauc_ndcg_at_5_max
value: 35.73812624135239
- type: nauc_ndcg_at_5_std
value: -4.5919788051135555
- type: nauc_precision_at_1000_diff1
value: -24.43795561769816
- type: nauc_precision_at_1000_max
value: -13.261416374377383
- type: nauc_precision_at_1000_std
value: -4.971448949934886
- type: nauc_precision_at_100_diff1
value: -16.883129718999133
- type: nauc_precision_at_100_max
value: -2.46701167013433
- type: nauc_precision_at_100_std
value: 3.277974208302033
- type: nauc_precision_at_10_diff1
value: 6.58192062605803
- type: nauc_precision_at_10_max
value: 17.66130584790626
- type: nauc_precision_at_10_std
value: 1.5300268853781491
- type: nauc_precision_at_1_diff1
value: 54.49327345294693
- type: nauc_precision_at_1_max
value: 34.8934028739382
- type: nauc_precision_at_1_std
value: -4.437791198183867
- type: nauc_precision_at_20_diff1
value: -1.8753425950377052
- type: nauc_precision_at_20_max
value: 12.343845069467402
- type: nauc_precision_at_20_std
value: 4.625866298054727
- type: nauc_precision_at_3_diff1
value: 26.25293210293932
- type: nauc_precision_at_3_max
value: 31.20810752338666
- type: nauc_precision_at_3_std
value: -4.53249841922141
- type: nauc_precision_at_5_diff1
value: 16.615368164537657
- type: nauc_precision_at_5_max
value: 25.232698186133707
- type: nauc_precision_at_5_std
value: -2.663050054635891
- type: nauc_recall_at_1000_diff1
value: 35.83705078359078
- type: nauc_recall_at_1000_max
value: 62.30748246780233
- type: nauc_recall_at_1000_std
value: 63.240763200045805
- type: nauc_recall_at_100_diff1
value: 33.467633455800815
- type: nauc_recall_at_100_max
value: 36.60323449435162
- type: nauc_recall_at_100_std
value: 14.015411684054346
- type: nauc_recall_at_10_diff1
value: 41.42599884119931
- type: nauc_recall_at_10_max
value: 33.20419643286129
- type: nauc_recall_at_10_std
value: -2.159643957172222
- type: nauc_recall_at_1_diff1
value: 58.88454782432587
- type: nauc_recall_at_1_max
value: 31.58887282969742
- type: nauc_recall_at_1_std
value: -3.3197840386400843
- type: nauc_recall_at_20_diff1
value: 40.65866346855011
- type: nauc_recall_at_20_max
value: 35.30555514387619
- type: nauc_recall_at_20_std
value: 0.08694081684299272
- type: nauc_recall_at_3_diff1
value: 46.09760653175857
- type: nauc_recall_at_3_max
value: 34.90824497182377
- type: nauc_recall_at_3_std
value: -5.655059126448061
- type: nauc_recall_at_5_diff1
value: 41.53532865271283
- type: nauc_recall_at_5_max
value: 33.39745163988502
- type: nauc_recall_at_5_std
value: -5.016436615159224
- type: ndcg_at_1
value: 41.202
- type: ndcg_at_10
value: 53.176
- type: ndcg_at_100
value: 58.328
- type: ndcg_at_1000
value: 59.965999999999994
- type: ndcg_at_20
value: 55.008
- type: ndcg_at_3
value: 47.859
- type: ndcg_at_5
value: 50.768
- type: precision_at_1
value: 41.202
- type: precision_at_10
value: 10.186
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.20400000000000001
- type: precision_at_20
value: 5.973
- type: precision_at_3
value: 23.176
- type: precision_at_5
value: 16.881
- type: recall_at_1
value: 33.650999999999996
- type: recall_at_10
value: 65.977
- type: recall_at_100
value: 87.302
- type: recall_at_1000
value: 97.336
- type: recall_at_20
value: 72.294
- type: recall_at_3
value: 50.797000000000004
- type: recall_at_5
value: 58.872
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval (default)
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: main_score
value: 50.92100000000001
- type: map_at_1
value: 33.744
- type: map_at_10
value: 44.815
- type: map_at_100
value: 46.245999999999995
- type: map_at_1000
value: 46.376
- type: map_at_20
value: 45.609
- type: map_at_3
value: 41.531
- type: map_at_5
value: 43.391999999999996
- type: mrr_at_1
value: 42.10191082802548
- type: mrr_at_10
value: 51.08573450611672
- type: mrr_at_100
value: 51.74891122170677
- type: mrr_at_1000
value: 51.78529712995296
- type: mrr_at_20
value: 51.4967715101907
- type: mrr_at_3
value: 48.91719745222933
- type: mrr_at_5
value: 50.28980891719754
- type: nauc_map_at_1000_diff1
value: 55.176936659421294
- type: nauc_map_at_1000_max
value: 36.48371284702768
- type: nauc_map_at_1000_std
value: -2.4447515702989806
- type: nauc_map_at_100_diff1
value: 55.1863019000113
- type: nauc_map_at_100_max
value: 36.43246962553196
- type: nauc_map_at_100_std
value: -2.5450740079709044
- type: nauc_map_at_10_diff1
value: 55.762997970306394
- type: nauc_map_at_10_max
value: 35.380624071909175
- type: nauc_map_at_10_std
value: -4.558912389227884
- type: nauc_map_at_1_diff1
value: 61.0608868067328
- type: nauc_map_at_1_max
value: 29.72548408222947
- type: nauc_map_at_1_std
value: -10.069038170579741
- type: nauc_map_at_20_diff1
value: 55.41603585876044
- type: nauc_map_at_20_max
value: 36.02816334732108
- type: nauc_map_at_20_std
value: -3.3699246431509717
- type: nauc_map_at_3_diff1
value: 56.82908515426453
- type: nauc_map_at_3_max
value: 33.15737676707489
- type: nauc_map_at_3_std
value: -7.378910489256622
- type: nauc_map_at_5_diff1
value: 56.14588532401665
- type: nauc_map_at_5_max
value: 34.414293818549005
- type: nauc_map_at_5_std
value: -6.047619727680526
- type: nauc_mrr_at_1000_diff1
value: 52.56773367624669
- type: nauc_mrr_at_1000_max
value: 39.31200496491635
- type: nauc_mrr_at_1000_std
value: 2.0642958415792685
- type: nauc_mrr_at_100_diff1
value: 52.56372613071439
- type: nauc_mrr_at_100_max
value: 39.3159360559684
- type: nauc_mrr_at_100_std
value: 2.0805091403344997
- type: nauc_mrr_at_10_diff1
value: 52.64975462157789
- type: nauc_mrr_at_10_max
value: 39.208820614240295
- type: nauc_mrr_at_10_std
value: 1.5932304576085854
- type: nauc_mrr_at_1_diff1
value: 56.58854551625778
- type: nauc_mrr_at_1_max
value: 38.83187422216751
- type: nauc_mrr_at_1_std
value: -1.1292455097337009
- type: nauc_mrr_at_20_diff1
value: 52.57378574296517
- type: nauc_mrr_at_20_max
value: 39.33846363894702
- type: nauc_mrr_at_20_std
value: 2.013232706080241
- type: nauc_mrr_at_3_diff1
value: 52.92910407019309
- type: nauc_mrr_at_3_max
value: 38.91108571047644
- type: nauc_mrr_at_3_std
value: 1.067703035548225
- type: nauc_mrr_at_5_diff1
value: 52.636125724089254
- type: nauc_mrr_at_5_max
value: 39.209637006609455
- type: nauc_mrr_at_5_std
value: 1.2426388707039298
- type: nauc_ndcg_at_1000_diff1
value: 52.31111968341887
- type: nauc_ndcg_at_1000_max
value: 38.75742129669778
- type: nauc_ndcg_at_1000_std
value: 3.5536257954775157
- type: nauc_ndcg_at_100_diff1
value: 52.37103775070086
- type: nauc_ndcg_at_100_max
value: 38.753000166661344
- type: nauc_ndcg_at_100_std
value: 3.6667964133015762
- type: nauc_ndcg_at_10_diff1
value: 53.56092641993905
- type: nauc_ndcg_at_10_max
value: 37.62257371918095
- type: nauc_ndcg_at_10_std
value: -0.3933425825827704
- type: nauc_ndcg_at_1_diff1
value: 56.58854551625778
- type: nauc_ndcg_at_1_max
value: 38.83187422216751
- type: nauc_ndcg_at_1_std
value: -1.1292455097337009
- type: nauc_ndcg_at_20_diff1
value: 52.997119382659484
- type: nauc_ndcg_at_20_max
value: 38.41095357471896
- type: nauc_ndcg_at_20_std
value: 1.9075677183444468
- type: nauc_ndcg_at_3_diff1
value: 53.32041550278149
- type: nauc_ndcg_at_3_max
value: 36.54542124064425
- type: nauc_ndcg_at_3_std
value: -2.1268638356088374
- type: nauc_ndcg_at_5_diff1
value: 53.389257836500256
- type: nauc_ndcg_at_5_max
value: 37.307434494043676
- type: nauc_ndcg_at_5_std
value: -1.7664881562750538
- type: nauc_precision_at_1000_diff1
value: -18.061781127353505
- type: nauc_precision_at_1000_max
value: 14.164961693343972
- type: nauc_precision_at_1000_std
value: 32.08207789236699
- type: nauc_precision_at_100_diff1
value: -12.629587588058818
- type: nauc_precision_at_100_max
value: 23.723177704853438
- type: nauc_precision_at_100_std
value: 37.3224630704383
- type: nauc_precision_at_10_diff1
value: 6.0985411491844195
- type: nauc_precision_at_10_max
value: 34.01467623470949
- type: nauc_precision_at_10_std
value: 26.343490397284334
- type: nauc_precision_at_1_diff1
value: 56.58854551625778
- type: nauc_precision_at_1_max
value: 38.83187422216751
- type: nauc_precision_at_1_std
value: -1.1292455097337009
- type: nauc_precision_at_20_diff1
value: -2.905957928684381
- type: nauc_precision_at_20_max
value: 31.591825090757908
- type: nauc_precision_at_20_std
value: 32.989888342109076
- type: nauc_precision_at_3_diff1
value: 27.17928355856029
- type: nauc_precision_at_3_max
value: 37.33885605249689
- type: nauc_precision_at_3_std
value: 12.651453071713059
- type: nauc_precision_at_5_diff1
value: 16.526381349737242
- type: nauc_precision_at_5_max
value: 36.88010744074558
- type: nauc_precision_at_5_std
value: 19.135126725576384
- type: nauc_recall_at_1000_diff1
value: 36.638153528487635
- type: nauc_recall_at_1000_max
value: 45.19430762946925
- type: nauc_recall_at_1000_std
value: 42.57303922365023
- type: nauc_recall_at_100_diff1
value: 40.43544826397977
- type: nauc_recall_at_100_max
value: 40.784066455275706
- type: nauc_recall_at_100_std
value: 27.301271412381144
- type: nauc_recall_at_10_diff1
value: 48.37295419396959
- type: nauc_recall_at_10_max
value: 34.16271996741004
- type: nauc_recall_at_10_std
value: 0.9252807039977983
- type: nauc_recall_at_1_diff1
value: 61.0608868067328
- type: nauc_recall_at_1_max
value: 29.72548408222947
- type: nauc_recall_at_1_std
value: -10.069038170579741
- type: nauc_recall_at_20_diff1
value: 44.94065991142139
- type: nauc_recall_at_20_max
value: 37.603936202852786
- type: nauc_recall_at_20_std
value: 11.60064066504551
- type: nauc_recall_at_3_diff1
value: 51.99741579524252
- type: nauc_recall_at_3_max
value: 31.388906920168104
- type: nauc_recall_at_3_std
value: -6.153653310119753
- type: nauc_recall_at_5_diff1
value: 49.67027790654694
- type: nauc_recall_at_5_max
value: 33.09777021504344
- type: nauc_recall_at_5_std
value: -3.9074095515554643
- type: ndcg_at_1
value: 42.102000000000004
- type: ndcg_at_10
value: 50.92100000000001
- type: ndcg_at_100
value: 55.381
- type: ndcg_at_1000
value: 57.18600000000001
- type: ndcg_at_20
value: 52.778000000000006
- type: ndcg_at_3
value: 46.542
- type: ndcg_at_5
value: 48.681000000000004
- type: precision_at_1
value: 42.102000000000004
- type: precision_at_10
value: 9.745
- type: precision_at_100
value: 1.548
- type: precision_at_1000
value: 0.198
- type: precision_at_20
value: 5.742
- type: precision_at_3
value: 22.695999999999998
- type: precision_at_5
value: 16.14
- type: recall_at_1
value: 33.744
- type: recall_at_10
value: 61.17700000000001
- type: recall_at_100
value: 79.71000000000001
- type: recall_at_1000
value: 91.008
- type: recall_at_20
value: 68.03399999999999
- type: recall_at_3
value: 48.087
- type: recall_at_5
value: 54.142
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval (default)
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: main_score
value: 62.458000000000006
- type: map_at_1
value: 43.839
- type: map_at_10
value: 56.724
- type: map_at_100
value: 57.751
- type: map_at_1000
value: 57.797
- type: map_at_20
value: 57.387
- type: map_at_3
value: 53.494
- type: map_at_5
value: 55.372
- type: mrr_at_1
value: 50.15673981191222
- type: mrr_at_10
value: 60.11456933870735
- type: mrr_at_100
value: 60.76087999656381
- type: mrr_at_1000
value: 60.77978089317033
- type: mrr_at_20
value: 60.55360369120728
- type: mrr_at_3
value: 58.025078369906026
- type: mrr_at_5
value: 59.22257053291546
- type: nauc_map_at_1000_diff1
value: 54.411253174343344
- type: nauc_map_at_1000_max
value: 39.83549610516408
- type: nauc_map_at_1000_std
value: -2.194420641407535
- type: nauc_map_at_100_diff1
value: 54.38831483785624
- type: nauc_map_at_100_max
value: 39.80801320822348
- type: nauc_map_at_100_std
value: -2.1803664698780842
- type: nauc_map_at_10_diff1
value: 54.45604359775012
- type: nauc_map_at_10_max
value: 39.063307413982
- type: nauc_map_at_10_std
value: -3.4236632847098423
- type: nauc_map_at_1_diff1
value: 56.60631395015112
- type: nauc_map_at_1_max
value: 32.467568481080036
- type: nauc_map_at_1_std
value: -5.800399911526891
- type: nauc_map_at_20_diff1
value: 54.370786642447655
- type: nauc_map_at_20_max
value: 39.59321046436977
- type: nauc_map_at_20_std
value: -2.4088559799214813
- type: nauc_map_at_3_diff1
value: 55.49957006713255
- type: nauc_map_at_3_max
value: 37.118764615368356
- type: nauc_map_at_3_std
value: -5.909943937274052
- type: nauc_map_at_5_diff1
value: 54.81041509611971
- type: nauc_map_at_5_max
value: 38.24140182494858
- type: nauc_map_at_5_std
value: -4.509625968871774
- type: nauc_mrr_at_1000_diff1
value: 53.74660770823747
- type: nauc_mrr_at_1000_max
value: 41.361501849395225
- type: nauc_mrr_at_1000_std
value: -0.8127913246616565
- type: nauc_mrr_at_100_diff1
value: 53.737280189706624
- type: nauc_mrr_at_100_max
value: 41.373323086448075
- type: nauc_mrr_at_100_std
value: -0.7945211619535609
- type: nauc_mrr_at_10_diff1
value: 53.60002836781194
- type: nauc_mrr_at_10_max
value: 41.294906284672145
- type: nauc_mrr_at_10_std
value: -1.133159614693189
- type: nauc_mrr_at_1_diff1
value: 55.872003219794344
- type: nauc_mrr_at_1_max
value: 38.42398154139028
- type: nauc_mrr_at_1_std
value: -3.262385266943247
- type: nauc_mrr_at_20_diff1
value: 53.660372497054865
- type: nauc_mrr_at_20_max
value: 41.423640159792335
- type: nauc_mrr_at_20_std
value: -0.6992108032958381
- type: nauc_mrr_at_3_diff1
value: 54.246382328404074
- type: nauc_mrr_at_3_max
value: 41.167575858831476
- type: nauc_mrr_at_3_std
value: -1.9090830671107353
- type: nauc_mrr_at_5_diff1
value: 53.85586718570862
- type: nauc_mrr_at_5_max
value: 40.98294334278317
- type: nauc_mrr_at_5_std
value: -1.7121845127201107
- type: nauc_ndcg_at_1000_diff1
value: 53.37939317348487
- type: nauc_ndcg_at_1000_max
value: 42.25503051093623
- type: nauc_ndcg_at_1000_std
value: 0.9024947979875332
- type: nauc_ndcg_at_100_diff1
value: 53.02194451446347
- type: nauc_ndcg_at_100_max
value: 42.43117968471603
- type: nauc_ndcg_at_100_std
value: 1.6097860371997164
- type: nauc_ndcg_at_10_diff1
value: 52.864882508290044
- type: nauc_ndcg_at_10_max
value: 41.30405029504235
- type: nauc_ndcg_at_10_std
value: -1.1315174337193916
- type: nauc_ndcg_at_1_diff1
value: 55.872003219794344
- type: nauc_ndcg_at_1_max
value: 38.42398154139028
- type: nauc_ndcg_at_1_std
value: -3.262385266943247
- type: nauc_ndcg_at_20_diff1
value: 52.78243804716271
- type: nauc_ndcg_at_20_max
value: 42.200708727692884
- type: nauc_ndcg_at_20_std
value: 1.204386994029969
- type: nauc_ndcg_at_3_diff1
value: 54.134588048680165
- type: nauc_ndcg_at_3_max
value: 39.262737508813956
- type: nauc_ndcg_at_3_std
value: -3.9798145740330866
- type: nauc_ndcg_at_5_diff1
value: 53.43380266993641
- type: nauc_ndcg_at_5_max
value: 40.1700690079209
- type: nauc_ndcg_at_5_std
value: -2.81233830575759
- type: nauc_precision_at_1000_diff1
value: -16.085237050718256
- type: nauc_precision_at_1000_max
value: 21.56903927967793
- type: nauc_precision_at_1000_std
value: 25.163563893770934
- type: nauc_precision_at_100_diff1
value: -13.409177660433013
- type: nauc_precision_at_100_max
value: 26.191889066691694
- type: nauc_precision_at_100_std
value: 30.434449110434343
- type: nauc_precision_at_10_diff1
value: 7.653820392496794
- type: nauc_precision_at_10_max
value: 33.512847797440386
- type: nauc_precision_at_10_std
value: 17.46948584875833
- type: nauc_precision_at_1_diff1
value: 55.872003219794344
- type: nauc_precision_at_1_max
value: 38.42398154139028
- type: nauc_precision_at_1_std
value: -3.262385266943247
- type: nauc_precision_at_20_diff1
value: -1.7882509799446464
- type: nauc_precision_at_20_max
value: 32.667378017254244
- type: nauc_precision_at_20_std
value: 27.51279914879186
- type: nauc_precision_at_3_diff1
value: 30.46461628659826
- type: nauc_precision_at_3_max
value: 37.74901386898987
- type: nauc_precision_at_3_std
value: 2.466674787017699
- type: nauc_precision_at_5_diff1
value: 18.80573985694938
- type: nauc_precision_at_5_max
value: 34.86218095871847
- type: nauc_precision_at_5_std
value: 9.231195357997013
- type: nauc_recall_at_1000_diff1
value: 44.175128440767175
- type: nauc_recall_at_1000_max
value: 72.76306751265861
- type: nauc_recall_at_1000_std
value: 69.72788552092433
- type: nauc_recall_at_100_diff1
value: 39.33252228382757
- type: nauc_recall_at_100_max
value: 55.56135688396655
- type: nauc_recall_at_100_std
value: 37.203018125948766
- type: nauc_recall_at_10_diff1
value: 45.481900144718836
- type: nauc_recall_at_10_max
value: 42.54097511363277
- type: nauc_recall_at_10_std
value: 2.6063345056649796
- type: nauc_recall_at_1_diff1
value: 56.60631395015112
- type: nauc_recall_at_1_max
value: 32.467568481080036
- type: nauc_recall_at_1_std
value: -5.800399911526891
- type: nauc_recall_at_20_diff1
value: 42.76239836038449
- type: nauc_recall_at_20_max
value: 48.446363988908665
- type: nauc_recall_at_20_std
value: 17.640762405916508
- type: nauc_recall_at_3_diff1
value: 51.60470647047845
- type: nauc_recall_at_3_max
value: 37.418467889921224
- type: nauc_recall_at_3_std
value: -6.408088458035488
- type: nauc_recall_at_5_diff1
value: 48.70731792808808
- type: nauc_recall_at_5_max
value: 39.09353288109433
- type: nauc_recall_at_5_std
value: -3.262225734608099
- type: ndcg_at_1
value: 50.157
- type: ndcg_at_10
value: 62.458000000000006
- type: ndcg_at_100
value: 66.27499999999999
- type: ndcg_at_1000
value: 67.11
- type: ndcg_at_20
value: 64.3
- type: ndcg_at_3
value: 57.348
- type: ndcg_at_5
value: 59.870999999999995
- type: precision_at_1
value: 50.157
- type: precision_at_10
value: 9.875
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.527
- type: precision_at_3
value: 25.474999999999998
- type: precision_at_5
value: 17.279
- type: recall_at_1
value: 43.839
- type: recall_at_10
value: 75.94300000000001
- type: recall_at_100
value: 92.036
- type: recall_at_1000
value: 97.848
- type: recall_at_20
value: 82.592
- type: recall_at_3
value: 62.227
- type: recall_at_5
value: 68.443
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval (default)
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: main_score
value: 43.805
- type: map_at_1
value: 29.429
- type: map_at_10
value: 38.708
- type: map_at_100
value: 39.834
- type: map_at_1000
value: 39.896
- type: map_at_20
value: 39.330999999999996
- type: map_at_3
value: 36.02
- type: map_at_5
value: 37.547999999999995
- type: mrr_at_1
value: 31.63841807909605
- type: mrr_at_10
value: 40.82633844498248
- type: mrr_at_100
value: 41.76109003638645
- type: mrr_at_1000
value: 41.8059087475105
- type: mrr_at_20
value: 41.36288532812116
- type: mrr_at_3
value: 38.24858757062146
- type: mrr_at_5
value: 39.717514124293764
- type: nauc_map_at_1000_diff1
value: 45.585812879455524
- type: nauc_map_at_1000_max
value: 31.31175404949036
- type: nauc_map_at_1000_std
value: -0.6688504922328871
- type: nauc_map_at_100_diff1
value: 45.57793192934199
- type: nauc_map_at_100_max
value: 31.31449058161509
- type: nauc_map_at_100_std
value: -0.6711471739699831
- type: nauc_map_at_10_diff1
value: 45.63641283675042
- type: nauc_map_at_10_max
value: 31.34383247627637
- type: nauc_map_at_10_std
value: -0.8969771419071247
- type: nauc_map_at_1_diff1
value: 51.20029025787074
- type: nauc_map_at_1_max
value: 29.29320638697403
- type: nauc_map_at_1_std
value: -4.195575175603184
- type: nauc_map_at_20_diff1
value: 45.50579311311032
- type: nauc_map_at_20_max
value: 31.162777948119203
- type: nauc_map_at_20_std
value: -0.8437520900178488
- type: nauc_map_at_3_diff1
value: 46.69781509400438
- type: nauc_map_at_3_max
value: 30.454657702219357
- type: nauc_map_at_3_std
value: -1.961062011363698
- type: nauc_map_at_5_diff1
value: 46.04910492816806
- type: nauc_map_at_5_max
value: 30.930622367372457
- type: nauc_map_at_5_std
value: -1.3197031926341913
- type: nauc_mrr_at_1000_diff1
value: 45.184418431720836
- type: nauc_mrr_at_1000_max
value: 32.691464662489466
- type: nauc_mrr_at_1000_std
value: 0.8007278440166657
- type: nauc_mrr_at_100_diff1
value: 45.167327620455126
- type: nauc_mrr_at_100_max
value: 32.70344473782206
- type: nauc_mrr_at_100_std
value: 0.8064086841104559
- type: nauc_mrr_at_10_diff1
value: 45.21931014425146
- type: nauc_mrr_at_10_max
value: 32.89922709426894
- type: nauc_mrr_at_10_std
value: 0.726548346036894
- type: nauc_mrr_at_1_diff1
value: 50.32992410650978
- type: nauc_mrr_at_1_max
value: 31.6443297540481
- type: nauc_mrr_at_1_std
value: -2.2413873790433225
- type: nauc_mrr_at_20_diff1
value: 45.113204601824044
- type: nauc_mrr_at_20_max
value: 32.61736305768626
- type: nauc_mrr_at_20_std
value: 0.7278143932053411
- type: nauc_mrr_at_3_diff1
value: 46.240077882820316
- type: nauc_mrr_at_3_max
value: 32.27275303260653
- type: nauc_mrr_at_3_std
value: 0.1282059654192661
- type: nauc_mrr_at_5_diff1
value: 45.58559508660604
- type: nauc_mrr_at_5_max
value: 32.59296526810394
- type: nauc_mrr_at_5_std
value: 0.7874095845402367
- type: nauc_ndcg_at_1000_diff1
value: 43.20858304283118
- type: nauc_ndcg_at_1000_max
value: 32.44654538809174
- type: nauc_ndcg_at_1000_std
value: 1.9808645746749782
- type: nauc_ndcg_at_100_diff1
value: 42.80944482285779
- type: nauc_ndcg_at_100_max
value: 32.63314035546906
- type: nauc_ndcg_at_100_std
value: 2.5177765413154884
- type: nauc_ndcg_at_10_diff1
value: 43.16290325539329
- type: nauc_ndcg_at_10_max
value: 32.61740129429683
- type: nauc_ndcg_at_10_std
value: 1.2892420693179965
- type: nauc_ndcg_at_1_diff1
value: 50.32992410650978
- type: nauc_ndcg_at_1_max
value: 31.6443297540481
- type: nauc_ndcg_at_1_std
value: -2.2413873790433225
- type: nauc_ndcg_at_20_diff1
value: 42.597191894775015
- type: nauc_ndcg_at_20_max
value: 31.751099582584125
- type: nauc_ndcg_at_20_std
value: 1.438787341128167
- type: nauc_ndcg_at_3_diff1
value: 45.425750906136706
- type: nauc_ndcg_at_3_max
value: 31.118153819129173
- type: nauc_ndcg_at_3_std
value: -0.7887794544621397
- type: nauc_ndcg_at_5_diff1
value: 44.24184750204594
- type: nauc_ndcg_at_5_max
value: 31.678340776396162
- type: nauc_ndcg_at_5_std
value: 0.38897464065601617
- type: nauc_precision_at_1000_diff1
value: -9.25461469977963
- type: nauc_precision_at_1000_max
value: 11.546970772317056
- type: nauc_precision_at_1000_std
value: 11.77950666462821
- type: nauc_precision_at_100_diff1
value: 5.325820460767819
- type: nauc_precision_at_100_max
value: 22.610950942174625
- type: nauc_precision_at_100_std
value: 16.210181509270097
- type: nauc_precision_at_10_diff1
value: 26.09126825014653
- type: nauc_precision_at_10_max
value: 35.00999838883753
- type: nauc_precision_at_10_std
value: 9.40564293375869
- type: nauc_precision_at_1_diff1
value: 50.32992410650978
- type: nauc_precision_at_1_max
value: 31.6443297540481
- type: nauc_precision_at_1_std
value: -2.2413873790433225
- type: nauc_precision_at_20_diff1
value: 19.233219692159693
- type: nauc_precision_at_20_max
value: 29.03044299067655
- type: nauc_precision_at_20_std
value: 10.317579302538391
- type: nauc_precision_at_3_diff1
value: 37.364819598304315
- type: nauc_precision_at_3_max
value: 33.379165297552724
- type: nauc_precision_at_3_std
value: 3.424932892620743
- type: nauc_precision_at_5_diff1
value: 32.872702946200945
- type: nauc_precision_at_5_max
value: 34.571450997070706
- type: nauc_precision_at_5_std
value: 7.12035598939766
- type: nauc_recall_at_1000_diff1
value: 11.279997042195749
- type: nauc_recall_at_1000_max
value: 40.44953937460631
- type: nauc_recall_at_1000_std
value: 31.19505726194957
- type: nauc_recall_at_100_diff1
value: 24.15672423727942
- type: nauc_recall_at_100_max
value: 36.814968545741614
- type: nauc_recall_at_100_std
value: 21.50699037479782
- type: nauc_recall_at_10_diff1
value: 34.34584531211266
- type: nauc_recall_at_10_max
value: 34.196420028975375
- type: nauc_recall_at_10_std
value: 6.855963891373787
- type: nauc_recall_at_1_diff1
value: 51.20029025787074
- type: nauc_recall_at_1_max
value: 29.29320638697403
- type: nauc_recall_at_1_std
value: -4.195575175603184
- type: nauc_recall_at_20_diff1
value: 30.313271321859748
- type: nauc_recall_at_20_max
value: 30.019409239750388
- type: nauc_recall_at_20_std
value: 8.01887379774591
- type: nauc_recall_at_3_diff1
value: 41.3611355564578
- type: nauc_recall_at_3_max
value: 30.190666918387272
- type: nauc_recall_at_3_std
value: 0.7366693042344981
- type: nauc_recall_at_5_diff1
value: 38.46041757825592
- type: nauc_recall_at_5_max
value: 31.35545227469271
- type: nauc_recall_at_5_std
value: 3.226901160844341
- type: ndcg_at_1
value: 31.637999999999998
- type: ndcg_at_10
value: 43.805
- type: ndcg_at_100
value: 49.168
- type: ndcg_at_1000
value: 50.77700000000001
- type: ndcg_at_20
value: 45.866
- type: ndcg_at_3
value: 38.608
- type: ndcg_at_5
value: 41.152
- type: precision_at_1
value: 31.637999999999998
- type: precision_at_10
value: 6.61
- type: precision_at_100
value: 0.9809999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_20
value: 3.7800000000000002
- type: precision_at_3
value: 16.195999999999998
- type: precision_at_5
value: 11.209
- type: recall_at_1
value: 29.429
- type: recall_at_10
value: 57.327
- type: recall_at_100
value: 81.74900000000001
- type: recall_at_1000
value: 93.967
- type: recall_at_20
value: 65.01400000000001
- type: recall_at_3
value: 43.472
- type: recall_at_5
value: 49.521
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval (default)
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: main_score
value: 34.63
- type: map_at_1
value: 20.541999999999998
- type: map_at_10
value: 29.121000000000002
- type: map_at_100
value: 30.389
- type: map_at_1000
value: 30.497999999999998
- type: map_at_20
value: 29.787999999999997
- type: map_at_3
value: 26.514
- type: map_at_5
value: 27.723
- type: mrr_at_1
value: 24.62686567164179
- type: mrr_at_10
value: 33.77897220247966
- type: mrr_at_100
value: 34.71645100175941
- type: mrr_at_1000
value: 34.77428365380689
- type: mrr_at_20
value: 34.31909140865809
- type: mrr_at_3
value: 31.281094527363194
- type: mrr_at_5
value: 32.568407960199
- type: nauc_map_at_1000_diff1
value: 31.065597401371054
- type: nauc_map_at_1000_max
value: 22.53058113245784
- type: nauc_map_at_1000_std
value: 3.385336368837248
- type: nauc_map_at_100_diff1
value: 31.066996795756317
- type: nauc_map_at_100_max
value: 22.526621520577233
- type: nauc_map_at_100_std
value: 3.390224080489411
- type: nauc_map_at_10_diff1
value: 30.98735163587709
- type: nauc_map_at_10_max
value: 22.033975223583145
- type: nauc_map_at_10_std
value: 3.037362136271266
- type: nauc_map_at_1_diff1
value: 34.7860915604864
- type: nauc_map_at_1_max
value: 21.990883014000932
- type: nauc_map_at_1_std
value: 3.215046066755989
- type: nauc_map_at_20_diff1
value: 30.95841793371864
- type: nauc_map_at_20_max
value: 22.312212038670587
- type: nauc_map_at_20_std
value: 3.204234721808634
- type: nauc_map_at_3_diff1
value: 31.873464867905415
- type: nauc_map_at_3_max
value: 22.344535220057306
- type: nauc_map_at_3_std
value: 3.037466472476692
- type: nauc_map_at_5_diff1
value: 31.298770866792836
- type: nauc_map_at_5_max
value: 22.02799162331672
- type: nauc_map_at_5_std
value: 2.994008224596537
- type: nauc_mrr_at_1000_diff1
value: 32.58365390317668
- type: nauc_mrr_at_1000_max
value: 24.960504988463303
- type: nauc_mrr_at_1000_std
value: 3.266331629091531
- type: nauc_mrr_at_100_diff1
value: 32.563483708724526
- type: nauc_mrr_at_100_max
value: 24.956287015467943
- type: nauc_mrr_at_100_std
value: 3.270422121157774
- type: nauc_mrr_at_10_diff1
value: 32.65613325350289
- type: nauc_mrr_at_10_max
value: 24.825654782716384
- type: nauc_mrr_at_10_std
value: 3.1340776275891025
- type: nauc_mrr_at_1_diff1
value: 36.55632726985752
- type: nauc_mrr_at_1_max
value: 24.4445917993785
- type: nauc_mrr_at_1_std
value: 2.264391282317747
- type: nauc_mrr_at_20_diff1
value: 32.47925104262513
- type: nauc_mrr_at_20_max
value: 24.89432614603361
- type: nauc_mrr_at_20_std
value: 3.1774200263878054
- type: nauc_mrr_at_3_diff1
value: 33.50322152633588
- type: nauc_mrr_at_3_max
value: 25.199564396471096
- type: nauc_mrr_at_3_std
value: 2.9397581352257345
- type: nauc_mrr_at_5_diff1
value: 32.9982729251397
- type: nauc_mrr_at_5_max
value: 24.890193912899377
- type: nauc_mrr_at_5_std
value: 3.0867452313583623
- type: nauc_ndcg_at_1000_diff1
value: 30.026151364827403
- type: nauc_ndcg_at_1000_max
value: 24.49889088739547
- type: nauc_ndcg_at_1000_std
value: 5.381413285104224
- type: nauc_ndcg_at_100_diff1
value: 29.80539228010773
- type: nauc_ndcg_at_100_max
value: 24.309010907634338
- type: nauc_ndcg_at_100_std
value: 5.232303167670201
- type: nauc_ndcg_at_10_diff1
value: 29.691994838075185
- type: nauc_ndcg_at_10_max
value: 22.67822625590708
- type: nauc_ndcg_at_10_std
value: 3.499987146410407
- type: nauc_ndcg_at_1_diff1
value: 36.55632726985752
- type: nauc_ndcg_at_1_max
value: 24.4445917993785
- type: nauc_ndcg_at_1_std
value: 2.264391282317747
- type: nauc_ndcg_at_20_diff1
value: 29.345854238086844
- type: nauc_ndcg_at_20_max
value: 23.323621216002355
- type: nauc_ndcg_at_20_std
value: 3.9174664108448236
- type: nauc_ndcg_at_3_diff1
value: 31.580762995014105
- type: nauc_ndcg_at_3_max
value: 23.30762843542372
- type: nauc_ndcg_at_3_std
value: 3.0944885327411535
- type: nauc_ndcg_at_5_diff1
value: 30.47041676971102
- type: nauc_ndcg_at_5_max
value: 22.77605457106532
- type: nauc_ndcg_at_5_std
value: 3.3449847079523596
- type: nauc_precision_at_1000_diff1
value: 0.717852604455919
- type: nauc_precision_at_1000_max
value: 3.38068239732633
- type: nauc_precision_at_1000_std
value: 0.13673896630835028
- type: nauc_precision_at_100_diff1
value: 7.401760552752896
- type: nauc_precision_at_100_max
value: 13.294128452575041
- type: nauc_precision_at_100_std
value: 4.65501490276724
- type: nauc_precision_at_10_diff1
value: 19.426577293440936
- type: nauc_precision_at_10_max
value: 18.143059865611235
- type: nauc_precision_at_10_std
value: 3.4033224978068946
- type: nauc_precision_at_1_diff1
value: 36.55632726985752
- type: nauc_precision_at_1_max
value: 24.4445917993785
- type: nauc_precision_at_1_std
value: 2.264391282317747
- type: nauc_precision_at_20_diff1
value: 15.526124347926789
- type: nauc_precision_at_20_max
value: 18.585967204985604
- type: nauc_precision_at_20_std
value: 3.3631487559984836
- type: nauc_precision_at_3_diff1
value: 27.11838946665272
- type: nauc_precision_at_3_max
value: 22.13989357114677
- type: nauc_precision_at_3_std
value: 1.903120042102994
- type: nauc_precision_at_5_diff1
value: 23.35690634122196
- type: nauc_precision_at_5_max
value: 19.585624668123234
- type: nauc_precision_at_5_std
value: 2.1933428786067988
- type: nauc_recall_at_1000_diff1
value: 16.950131691896043
- type: nauc_recall_at_1000_max
value: 39.951723428573956
- type: nauc_recall_at_1000_std
value: 35.28642001796766
- type: nauc_recall_at_100_diff1
value: 21.660771108426637
- type: nauc_recall_at_100_max
value: 27.98817391149549
- type: nauc_recall_at_100_std
value: 15.547143224954521
- type: nauc_recall_at_10_diff1
value: 23.290961405166108
- type: nauc_recall_at_10_max
value: 20.728190074086502
- type: nauc_recall_at_10_std
value: 3.955634752870681
- type: nauc_recall_at_1_diff1
value: 34.7860915604864
- type: nauc_recall_at_1_max
value: 21.990883014000932
- type: nauc_recall_at_1_std
value: 3.215046066755989
- type: nauc_recall_at_20_diff1
value: 21.3100020769249
- type: nauc_recall_at_20_max
value: 22.417233320077408
- type: nauc_recall_at_20_std
value: 5.701968308692029
- type: nauc_recall_at_3_diff1
value: 28.467978075005014
- type: nauc_recall_at_3_max
value: 22.86743332429378
- type: nauc_recall_at_3_std
value: 4.126266767988962
- type: nauc_recall_at_5_diff1
value: 26.085272342534953
- type: nauc_recall_at_5_max
value: 21.547168834265605
- type: nauc_recall_at_5_std
value: 4.230798615841751
- type: ndcg_at_1
value: 24.627
- type: ndcg_at_10
value: 34.63
- type: ndcg_at_100
value: 40.501
- type: ndcg_at_1000
value: 42.925000000000004
- type: ndcg_at_20
value: 36.783
- type: ndcg_at_3
value: 29.784
- type: ndcg_at_5
value: 31.607000000000003
- type: precision_at_1
value: 24.627
- type: precision_at_10
value: 6.306000000000001
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_20
value: 3.762
- type: precision_at_3
value: 14.262
- type: precision_at_5
value: 10.025
- type: recall_at_1
value: 20.541999999999998
- type: recall_at_10
value: 46.805
- type: recall_at_100
value: 72.294
- type: recall_at_1000
value: 89.425
- type: recall_at_20
value: 54.481
- type: recall_at_3
value: 33.15
- type: recall_at_5
value: 37.830999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval (default)
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: main_score
value: 48.897
- type: map_at_1
value: 32.462
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.371
- type: map_at_1000
value: 44.484
- type: map_at_20
value: 43.756
- type: map_at_3
value: 39.762
- type: map_at_5
value: 41.515
- type: mrr_at_1
value: 39.46102021174206
- type: mrr_at_10
value: 48.738637578868556
- type: mrr_at_100
value: 49.62686026413403
- type: mrr_at_1000
value: 49.66868518383456
- type: mrr_at_20
value: 49.25907585537658
- type: mrr_at_3
value: 46.310555020853364
- type: mrr_at_5
value: 47.78312479948663
- type: nauc_map_at_1000_diff1
value: 51.87542801592498
- type: nauc_map_at_1000_max
value: 33.97981571634409
- type: nauc_map_at_1000_std
value: -1.8786792242943482
- type: nauc_map_at_100_diff1
value: 51.85293643969287
- type: nauc_map_at_100_max
value: 33.9428890229597
- type: nauc_map_at_100_std
value: -1.9332474390946115
- type: nauc_map_at_10_diff1
value: 52.02856985184854
- type: nauc_map_at_10_max
value: 33.61198359968745
- type: nauc_map_at_10_std
value: -2.6398128511204884
- type: nauc_map_at_1_diff1
value: 56.74886676878923
- type: nauc_map_at_1_max
value: 30.22917247812168
- type: nauc_map_at_1_std
value: -6.42573662254084
- type: nauc_map_at_20_diff1
value: 51.82428924313089
- type: nauc_map_at_20_max
value: 33.751285311806384
- type: nauc_map_at_20_std
value: -2.3103774320981803
- type: nauc_map_at_3_diff1
value: 51.86255252819861
- type: nauc_map_at_3_max
value: 33.0377584961136
- type: nauc_map_at_3_std
value: -3.2636230519519387
- type: nauc_map_at_5_diff1
value: 52.01515212806803
- type: nauc_map_at_5_max
value: 33.7459062556087
- type: nauc_map_at_5_std
value: -2.693869845552142
- type: nauc_mrr_at_1000_diff1
value: 51.48855418945387
- type: nauc_mrr_at_1000_max
value: 35.27912845548713
- type: nauc_mrr_at_1000_std
value: -0.08726282212006752
- type: nauc_mrr_at_100_diff1
value: 51.48335893173882
- type: nauc_mrr_at_100_max
value: 35.28023925219956
- type: nauc_mrr_at_100_std
value: -0.08619390644755517
- type: nauc_mrr_at_10_diff1
value: 51.52941953883595
- type: nauc_mrr_at_10_max
value: 35.08219573936157
- type: nauc_mrr_at_10_std
value: -0.5918448278251544
- type: nauc_mrr_at_1_diff1
value: 55.31838125779277
- type: nauc_mrr_at_1_max
value: 33.77228714612555
- type: nauc_mrr_at_1_std
value: -1.499292265426672
- type: nauc_mrr_at_20_diff1
value: 51.408259709777646
- type: nauc_mrr_at_20_max
value: 35.162570989755174
- type: nauc_mrr_at_20_std
value: -0.2682578167220845
- type: nauc_mrr_at_3_diff1
value: 51.46574092636792
- type: nauc_mrr_at_3_max
value: 35.811987430657325
- type: nauc_mrr_at_3_std
value: 0.26013601831722494
- type: nauc_mrr_at_5_diff1
value: 51.612013747911526
- type: nauc_mrr_at_5_max
value: 35.650056877501655
- type: nauc_mrr_at_5_std
value: -0.21245093564084463
- type: nauc_ndcg_at_1000_diff1
value: 50.880872461025305
- type: nauc_ndcg_at_1000_max
value: 35.44994521014937
- type: nauc_ndcg_at_1000_std
value: 1.118216393534395
- type: nauc_ndcg_at_100_diff1
value: 50.53466908072639
- type: nauc_ndcg_at_100_max
value: 35.11045555620107
- type: nauc_ndcg_at_100_std
value: 0.8249078981154204
- type: nauc_ndcg_at_10_diff1
value: 50.90734870734591
- type: nauc_ndcg_at_10_max
value: 33.771004172948224
- type: nauc_ndcg_at_10_std
value: -2.1711028069297633
- type: nauc_ndcg_at_1_diff1
value: 55.31838125779277
- type: nauc_ndcg_at_1_max
value: 33.77228714612555
- type: nauc_ndcg_at_1_std
value: -1.499292265426672
- type: nauc_ndcg_at_20_diff1
value: 50.23324800143884
- type: nauc_ndcg_at_20_max
value: 34.07801014616702
- type: nauc_ndcg_at_20_std
value: -1.124681004529109
- type: nauc_ndcg_at_3_diff1
value: 50.25341657253588
- type: nauc_ndcg_at_3_max
value: 34.591139933602335
- type: nauc_ndcg_at_3_std
value: -1.1956710813776108
- type: nauc_ndcg_at_5_diff1
value: 50.80312504204779
- type: nauc_ndcg_at_5_max
value: 34.85042501470775
- type: nauc_ndcg_at_5_std
value: -1.396135873756306
- type: nauc_precision_at_1000_diff1
value: -13.557597583919549
- type: nauc_precision_at_1000_max
value: 2.8147206953918125
- type: nauc_precision_at_1000_std
value: 14.537543538963874
- type: nauc_precision_at_100_diff1
value: -3.987982340720788
- type: nauc_precision_at_100_max
value: 12.028213960584699
- type: nauc_precision_at_100_std
value: 17.715033463695278
- type: nauc_precision_at_10_diff1
value: 18.57698421541843
- type: nauc_precision_at_10_max
value: 24.283366463408097
- type: nauc_precision_at_10_std
value: 9.324420531172114
- type: nauc_precision_at_1_diff1
value: 55.31838125779277
- type: nauc_precision_at_1_max
value: 33.77228714612555
- type: nauc_precision_at_1_std
value: -1.499292265426672
- type: nauc_precision_at_20_diff1
value: 8.944759267836282
- type: nauc_precision_at_20_max
value: 20.721165285655687
- type: nauc_precision_at_20_std
value: 13.176434479597365
- type: nauc_precision_at_3_diff1
value: 32.237083541824376
- type: nauc_precision_at_3_max
value: 32.11555184738906
- type: nauc_precision_at_3_std
value: 7.15349181819355
- type: nauc_precision_at_5_diff1
value: 26.273699865022195
- type: nauc_precision_at_5_max
value: 30.37038723885166
- type: nauc_precision_at_5_std
value: 8.769386986802829
- type: nauc_recall_at_1000_diff1
value: 37.18342037488666
- type: nauc_recall_at_1000_max
value: 51.700120834339295
- type: nauc_recall_at_1000_std
value: 51.25572071492458
- type: nauc_recall_at_100_diff1
value: 38.1032797078489
- type: nauc_recall_at_100_max
value: 35.62651164450783
- type: nauc_recall_at_100_std
value: 16.8247368098434
- type: nauc_recall_at_10_diff1
value: 44.77080899011338
- type: nauc_recall_at_10_max
value: 29.6963695239568
- type: nauc_recall_at_10_std
value: -3.503513207679883
- type: nauc_recall_at_1_diff1
value: 56.74886676878923
- type: nauc_recall_at_1_max
value: 30.22917247812168
- type: nauc_recall_at_1_std
value: -6.42573662254084
- type: nauc_recall_at_20_diff1
value: 40.23275073277284
- type: nauc_recall_at_20_max
value: 29.263920974237713
- type: nauc_recall_at_20_std
value: 0.4276885400977964
- type: nauc_recall_at_3_diff1
value: 46.04199760913928
- type: nauc_recall_at_3_max
value: 32.835175771043346
- type: nauc_recall_at_3_std
value: -2.3805979024363424
- type: nauc_recall_at_5_diff1
value: 45.848157092548504
- type: nauc_recall_at_5_max
value: 33.2265904276858
- type: nauc_recall_at_5_std
value: -2.0965197326580256
- type: ndcg_at_1
value: 39.461
- type: ndcg_at_10
value: 48.897
- type: ndcg_at_100
value: 54.541
- type: ndcg_at_1000
value: 56.371
- type: ndcg_at_20
value: 51.239000000000004
- type: ndcg_at_3
value: 44.129000000000005
- type: ndcg_at_5
value: 46.424
- type: precision_at_1
value: 39.461
- type: precision_at_10
value: 8.758000000000001
- type: precision_at_100
value: 1.3379999999999999
- type: precision_at_1000
value: 0.168
- type: precision_at_20
value: 5.135
- type: precision_at_3
value: 20.852999999999998
- type: precision_at_5
value: 14.649000000000001
- type: recall_at_1
value: 32.462
- type: recall_at_10
value: 60.531
- type: recall_at_100
value: 83.878
- type: recall_at_1000
value: 95.30999999999999
- type: recall_at_20
value: 68.771
- type: recall_at_3
value: 46.916000000000004
- type: recall_at_5
value: 53.09199999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval (default)
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: main_score
value: 45.226
- type: map_at_1
value: 27.887
- type: map_at_10
value: 39.086999999999996
- type: map_at_100
value: 40.477999999999994
- type: map_at_1000
value: 40.585
- type: map_at_20
value: 39.83
- type: map_at_3
value: 35.875
- type: map_at_5
value: 37.695
- type: mrr_at_1
value: 34.817351598173516
- type: mrr_at_10
value: 45.01653439153436
- type: mrr_at_100
value: 45.87242089610603
- type: mrr_at_1000
value: 45.920675520064755
- type: mrr_at_20
value: 45.507374469348
- type: mrr_at_3
value: 42.465753424657514
- type: mrr_at_5
value: 43.97260273972599
- type: nauc_map_at_1000_diff1
value: 43.95170137620123
- type: nauc_map_at_1000_max
value: 37.19129408748076
- type: nauc_map_at_1000_std
value: 7.888925157034662
- type: nauc_map_at_100_diff1
value: 43.9558720789432
- type: nauc_map_at_100_max
value: 37.214429573625246
- type: nauc_map_at_100_std
value: 7.933552664308029
- type: nauc_map_at_10_diff1
value: 44.21929145274994
- type: nauc_map_at_10_max
value: 36.65671027839632
- type: nauc_map_at_10_std
value: 6.982108869364163
- type: nauc_map_at_1_diff1
value: 49.74596478079841
- type: nauc_map_at_1_max
value: 32.56861544149044
- type: nauc_map_at_1_std
value: 1.097128889360163
- type: nauc_map_at_20_diff1
value: 44.104092078784234
- type: nauc_map_at_20_max
value: 36.99566957257224
- type: nauc_map_at_20_std
value: 7.477043291777348
- type: nauc_map_at_3_diff1
value: 44.467213345851086
- type: nauc_map_at_3_max
value: 35.03024865450431
- type: nauc_map_at_3_std
value: 5.06566672879735
- type: nauc_map_at_5_diff1
value: 44.554827534750636
- type: nauc_map_at_5_max
value: 36.31225914769019
- type: nauc_map_at_5_std
value: 6.0593177568412475
- type: nauc_mrr_at_1000_diff1
value: 41.8894252387263
- type: nauc_mrr_at_1000_max
value: 38.73824247221018
- type: nauc_mrr_at_1000_std
value: 10.312822889457024
- type: nauc_mrr_at_100_diff1
value: 41.88062595488504
- type: nauc_mrr_at_100_max
value: 38.74215906747668
- type: nauc_mrr_at_100_std
value: 10.353181155239255
- type: nauc_mrr_at_10_diff1
value: 41.94013647827115
- type: nauc_mrr_at_10_max
value: 38.78288768729759
- type: nauc_mrr_at_10_std
value: 10.090580330580437
- type: nauc_mrr_at_1_diff1
value: 47.56077396895218
- type: nauc_mrr_at_1_max
value: 36.98399403952428
- type: nauc_mrr_at_1_std
value: 6.5721798897773684
- type: nauc_mrr_at_20_diff1
value: 41.89386639716785
- type: nauc_mrr_at_20_max
value: 38.68491067215507
- type: nauc_mrr_at_20_std
value: 10.182838094619267
- type: nauc_mrr_at_3_diff1
value: 42.01969733662613
- type: nauc_mrr_at_3_max
value: 37.800805484199444
- type: nauc_mrr_at_3_std
value: 9.483998874247575
- type: nauc_mrr_at_5_diff1
value: 41.65309923696901
- type: nauc_mrr_at_5_max
value: 38.54063168917584
- type: nauc_mrr_at_5_std
value: 9.673479912636347
- type: nauc_ndcg_at_1000_diff1
value: 41.47176832694651
- type: nauc_ndcg_at_1000_max
value: 39.169786971026255
- type: nauc_ndcg_at_1000_std
value: 11.679974828658501
- type: nauc_ndcg_at_100_diff1
value: 41.222156890249764
- type: nauc_ndcg_at_100_max
value: 39.53250258278856
- type: nauc_ndcg_at_100_std
value: 12.933003811182312
- type: nauc_ndcg_at_10_diff1
value: 42.0337725964669
- type: nauc_ndcg_at_10_max
value: 38.273909940579124
- type: nauc_ndcg_at_10_std
value: 9.593414260430325
- type: nauc_ndcg_at_1_diff1
value: 47.56077396895218
- type: nauc_ndcg_at_1_max
value: 36.98399403952428
- type: nauc_ndcg_at_1_std
value: 6.5721798897773684
- type: nauc_ndcg_at_20_diff1
value: 41.85575848899653
- type: nauc_ndcg_at_20_max
value: 38.82160272309426
- type: nauc_ndcg_at_20_std
value: 10.794229083924927
- type: nauc_ndcg_at_3_diff1
value: 41.65599882159262
- type: nauc_ndcg_at_3_max
value: 36.15866038270778
- type: nauc_ndcg_at_3_std
value: 7.748508197949587
- type: nauc_ndcg_at_5_diff1
value: 42.28410633684388
- type: nauc_ndcg_at_5_max
value: 37.74519017293837
- type: nauc_ndcg_at_5_std
value: 8.061749452741854
- type: nauc_precision_at_1000_diff1
value: -13.371472140934939
- type: nauc_precision_at_1000_max
value: -1.9535541625334698
- type: nauc_precision_at_1000_std
value: 8.618739674058643
- type: nauc_precision_at_100_diff1
value: -5.44331936385817
- type: nauc_precision_at_100_max
value: 15.019947345639547
- type: nauc_precision_at_100_std
value: 23.080372230077405
- type: nauc_precision_at_10_diff1
value: 15.445549733621986
- type: nauc_precision_at_10_max
value: 30.89290049169744
- type: nauc_precision_at_10_std
value: 20.002890083398132
- type: nauc_precision_at_1_diff1
value: 47.56077396895218
- type: nauc_precision_at_1_max
value: 36.98399403952428
- type: nauc_precision_at_1_std
value: 6.5721798897773684
- type: nauc_precision_at_20_diff1
value: 8.623105688967403
- type: nauc_precision_at_20_max
value: 26.91178852977823
- type: nauc_precision_at_20_std
value: 22.17285887384737
- type: nauc_precision_at_3_diff1
value: 26.381468882549814
- type: nauc_precision_at_3_max
value: 35.90410043864788
- type: nauc_precision_at_3_std
value: 16.101145360947154
- type: nauc_precision_at_5_diff1
value: 22.842829661572875
- type: nauc_precision_at_5_max
value: 35.92997099694966
- type: nauc_precision_at_5_std
value: 18.18378930746855
- type: nauc_recall_at_1000_diff1
value: 13.266400124330257
- type: nauc_recall_at_1000_max
value: 58.21247340815739
- type: nauc_recall_at_1000_std
value: 57.31393380709915
- type: nauc_recall_at_100_diff1
value: 25.95593534295009
- type: nauc_recall_at_100_max
value: 45.03843584939201
- type: nauc_recall_at_100_std
value: 38.100799360138765
- type: nauc_recall_at_10_diff1
value: 34.789715559053604
- type: nauc_recall_at_10_max
value: 38.042187250662884
- type: nauc_recall_at_10_std
value: 13.229947908309544
- type: nauc_recall_at_1_diff1
value: 49.74596478079841
- type: nauc_recall_at_1_max
value: 32.56861544149044
- type: nauc_recall_at_1_std
value: 1.097128889360163
- type: nauc_recall_at_20_diff1
value: 33.384723599926446
- type: nauc_recall_at_20_max
value: 39.15835336776037
- type: nauc_recall_at_20_std
value: 17.52735115682057
- type: nauc_recall_at_3_diff1
value: 37.99962076163248
- type: nauc_recall_at_3_max
value: 33.51343167685077
- type: nauc_recall_at_3_std
value: 6.783531552157573
- type: nauc_recall_at_5_diff1
value: 37.02597430521191
- type: nauc_recall_at_5_max
value: 36.8381283963646
- type: nauc_recall_at_5_std
value: 8.407347972075284
- type: ndcg_at_1
value: 34.817
- type: ndcg_at_10
value: 45.226
- type: ndcg_at_100
value: 50.913
- type: ndcg_at_1000
value: 52.943
- type: ndcg_at_20
value: 47.367
- type: ndcg_at_3
value: 40.332
- type: ndcg_at_5
value: 42.555
- type: precision_at_1
value: 34.817
- type: precision_at_10
value: 8.322000000000001
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 4.869
- type: precision_at_3
value: 19.559
- type: precision_at_5
value: 13.79
- type: recall_at_1
value: 27.887
- type: recall_at_10
value: 57.523
- type: recall_at_100
value: 81.853
- type: recall_at_1000
value: 95.36200000000001
- type: recall_at_20
value: 65.069
- type: recall_at_3
value: 43.342000000000006
- type: recall_at_5
value: 49.596000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval (default)
revision: CQADupstackRetrieval_is_a_combined_dataset
split: test
type: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 44.86833333333333
- type: ndcg_at_10
value: 44.86833333333333
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval (default)
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: main_score
value: 39.654
- type: map_at_1
value: 28.488999999999997
- type: map_at_10
value: 35.621
- type: map_at_100
value: 36.662
- type: map_at_1000
value: 36.754
- type: map_at_20
value: 36.215
- type: map_at_3
value: 33.689
- type: map_at_5
value: 34.733999999999995
- type: mrr_at_1
value: 31.901840490797547
- type: mrr_at_10
value: 38.76101616515727
- type: mrr_at_100
value: 39.6328900317746
- type: mrr_at_1000
value: 39.69875777929701
- type: mrr_at_20
value: 39.27824740202471
- type: mrr_at_3
value: 37.11656441717794
- type: mrr_at_5
value: 38.090490797546025
- type: nauc_map_at_1000_diff1
value: 44.60417734115683
- type: nauc_map_at_1000_max
value: 40.97869080753014
- type: nauc_map_at_1000_std
value: 5.748743395996931
- type: nauc_map_at_100_diff1
value: 44.57736501620202
- type: nauc_map_at_100_max
value: 40.97420581082456
- type: nauc_map_at_100_std
value: 5.762383589620662
- type: nauc_map_at_10_diff1
value: 44.92204225912857
- type: nauc_map_at_10_max
value: 40.675386978230904
- type: nauc_map_at_10_std
value: 5.245272300708162
- type: nauc_map_at_1_diff1
value: 51.03525578589323
- type: nauc_map_at_1_max
value: 39.02148856903404
- type: nauc_map_at_1_std
value: 0.4146617412031749
- type: nauc_map_at_20_diff1
value: 44.58262404664568
- type: nauc_map_at_20_max
value: 40.77381417315517
- type: nauc_map_at_20_std
value: 5.530849792503221
- type: nauc_map_at_3_diff1
value: 45.930245969820646
- type: nauc_map_at_3_max
value: 40.436169462774416
- type: nauc_map_at_3_std
value: 3.3879829560660895
- type: nauc_map_at_5_diff1
value: 45.17424281922756
- type: nauc_map_at_5_max
value: 40.47857337528189
- type: nauc_map_at_5_std
value: 4.414695304860574
- type: nauc_mrr_at_1000_diff1
value: 44.08694838852825
- type: nauc_mrr_at_1000_max
value: 42.42348869902589
- type: nauc_mrr_at_1000_std
value: 7.942150916764917
- type: nauc_mrr_at_100_diff1
value: 44.04467099375857
- type: nauc_mrr_at_100_max
value: 42.43605871354086
- type: nauc_mrr_at_100_std
value: 7.956534359718217
- type: nauc_mrr_at_10_diff1
value: 44.266216857247684
- type: nauc_mrr_at_10_max
value: 42.30356366796194
- type: nauc_mrr_at_10_std
value: 7.644077273142069
- type: nauc_mrr_at_1_diff1
value: 50.221648566432464
- type: nauc_mrr_at_1_max
value: 41.235095557704646
- type: nauc_mrr_at_1_std
value: 3.7408348785402556
- type: nauc_mrr_at_20_diff1
value: 44.05821823838852
- type: nauc_mrr_at_20_max
value: 42.42933700317326
- type: nauc_mrr_at_20_std
value: 7.8665259168401445
- type: nauc_mrr_at_3_diff1
value: 45.03683838233249
- type: nauc_mrr_at_3_max
value: 42.24769488191134
- type: nauc_mrr_at_3_std
value: 6.601038869035635
- type: nauc_mrr_at_5_diff1
value: 44.201862019181455
- type: nauc_mrr_at_5_max
value: 42.07946832877691
- type: nauc_mrr_at_5_std
value: 7.189671715715843
- type: nauc_ndcg_at_1000_diff1
value: 42.42699854748652
- type: nauc_ndcg_at_1000_max
value: 42.43824947781245
- type: nauc_ndcg_at_1000_std
value: 9.67675385214925
- type: nauc_ndcg_at_100_diff1
value: 41.51922844841962
- type: nauc_ndcg_at_100_max
value: 42.61282487350817
- type: nauc_ndcg_at_100_std
value: 10.25445083001239
- type: nauc_ndcg_at_10_diff1
value: 42.574630501270825
- type: nauc_ndcg_at_10_max
value: 41.14145061750566
- type: nauc_ndcg_at_10_std
value: 7.647757048969349
- type: nauc_ndcg_at_1_diff1
value: 50.221648566432464
- type: nauc_ndcg_at_1_max
value: 41.235095557704646
- type: nauc_ndcg_at_1_std
value: 3.7408348785402556
- type: nauc_ndcg_at_20_diff1
value: 41.600087618079066
- type: nauc_ndcg_at_20_max
value: 41.491254134292376
- type: nauc_ndcg_at_20_std
value: 8.596229791444
- type: nauc_ndcg_at_3_diff1
value: 43.82522265410307
- type: nauc_ndcg_at_3_max
value: 41.10083488299727
- type: nauc_ndcg_at_3_std
value: 5.098425173217254
- type: nauc_ndcg_at_5_diff1
value: 42.72862798064444
- type: nauc_ndcg_at_5_max
value: 40.85829769060509
- type: nauc_ndcg_at_5_std
value: 6.31424002071968
- type: nauc_precision_at_1000_diff1
value: -2.0820534872545116
- type: nauc_precision_at_1000_max
value: 16.298683462791594
- type: nauc_precision_at_1000_std
value: 16.97189734146589
- type: nauc_precision_at_100_diff1
value: 6.4514456279287105
- type: nauc_precision_at_100_max
value: 30.968130476508765
- type: nauc_precision_at_100_std
value: 24.590810752136445
- type: nauc_precision_at_10_diff1
value: 23.83061356229352
- type: nauc_precision_at_10_max
value: 37.44657709667713
- type: nauc_precision_at_10_std
value: 18.3818856475441
- type: nauc_precision_at_1_diff1
value: 50.221648566432464
- type: nauc_precision_at_1_max
value: 41.235095557704646
- type: nauc_precision_at_1_std
value: 3.7408348785402556
- type: nauc_precision_at_20_diff1
value: 16.8100155001696
- type: nauc_precision_at_20_max
value: 35.019447938152055
- type: nauc_precision_at_20_std
value: 20.67504386650297
- type: nauc_precision_at_3_diff1
value: 35.33999854814717
- type: nauc_precision_at_3_max
value: 42.464592248955334
- type: nauc_precision_at_3_std
value: 11.735324415513306
- type: nauc_precision_at_5_diff1
value: 29.095637605444765
- type: nauc_precision_at_5_max
value: 40.80816684911544
- type: nauc_precision_at_5_std
value: 15.54403823719892
- type: nauc_recall_at_1000_diff1
value: 30.88859886501841
- type: nauc_recall_at_1000_max
value: 47.675952718888595
- type: nauc_recall_at_1000_std
value: 37.808899612070284
- type: nauc_recall_at_100_diff1
value: 27.102674231258376
- type: nauc_recall_at_100_max
value: 46.24207104250558
- type: nauc_recall_at_100_std
value: 29.033516460715735
- type: nauc_recall_at_10_diff1
value: 35.626332465234064
- type: nauc_recall_at_10_max
value: 39.7007789760367
- type: nauc_recall_at_10_std
value: 12.129960491073899
- type: nauc_recall_at_1_diff1
value: 51.03525578589323
- type: nauc_recall_at_1_max
value: 39.02148856903404
- type: nauc_recall_at_1_std
value: 0.4146617412031749
- type: nauc_recall_at_20_diff1
value: 31.088505920845705
- type: nauc_recall_at_20_max
value: 40.09779003608529
- type: nauc_recall_at_20_std
value: 15.383713495321466
- type: nauc_recall_at_3_diff1
value: 39.376987315291004
- type: nauc_recall_at_3_max
value: 39.579665630711865
- type: nauc_recall_at_3_std
value: 5.903646172290545
- type: nauc_recall_at_5_diff1
value: 36.374552126907986
- type: nauc_recall_at_5_max
value: 39.01714515551238
- type: nauc_recall_at_5_std
value: 8.765416107748178
- type: ndcg_at_1
value: 31.902
- type: ndcg_at_10
value: 39.654
- type: ndcg_at_100
value: 44.667
- type: ndcg_at_1000
value: 47.038999999999994
- type: ndcg_at_20
value: 41.619
- type: ndcg_at_3
value: 36.317
- type: ndcg_at_5
value: 37.887
- type: precision_at_1
value: 31.902
- type: precision_at_10
value: 5.997
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 3.489
- type: precision_at_3
value: 15.286
- type: precision_at_5
value: 10.306999999999999
- type: recall_at_1
value: 28.488999999999997
- type: recall_at_10
value: 48.684
- type: recall_at_100
value: 71.572
- type: recall_at_1000
value: 89.059
- type: recall_at_20
value: 56.089999999999996
- type: recall_at_3
value: 39.42
- type: recall_at_5
value: 43.461
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval (default)
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: main_score
value: 32.957
- type: map_at_1
value: 19.61
- type: map_at_10
value: 27.816999999999997
- type: map_at_100
value: 29.037000000000003
- type: map_at_1000
value: 29.164
- type: map_at_20
value: 28.48
- type: map_at_3
value: 25.212
- type: map_at_5
value: 26.552999999999997
- type: mrr_at_1
value: 23.675154852030282
- type: mrr_at_10
value: 31.855588874687356
- type: mrr_at_100
value: 32.82754708152588
- type: mrr_at_1000
value: 32.899811984634525
- type: mrr_at_20
value: 32.41521823340382
- type: mrr_at_3
value: 29.553796742372164
- type: mrr_at_5
value: 30.799495297086587
- type: nauc_map_at_1000_diff1
value: 37.067009963494954
- type: nauc_map_at_1000_max
value: 29.319194409596722
- type: nauc_map_at_1000_std
value: 0.9381129561343189
- type: nauc_map_at_100_diff1
value: 37.02118730103881
- type: nauc_map_at_100_max
value: 29.308885900656872
- type: nauc_map_at_100_std
value: 0.9305359416352115
- type: nauc_map_at_10_diff1
value: 37.055079813792894
- type: nauc_map_at_10_max
value: 29.115677528784456
- type: nauc_map_at_10_std
value: 0.47079061336618017
- type: nauc_map_at_1_diff1
value: 43.59374607558271
- type: nauc_map_at_1_max
value: 27.502697897665936
- type: nauc_map_at_1_std
value: -0.7674781552217746
- type: nauc_map_at_20_diff1
value: 37.08280714662923
- type: nauc_map_at_20_max
value: 29.214420781305805
- type: nauc_map_at_20_std
value: 0.7207141923408105
- type: nauc_map_at_3_diff1
value: 38.12508979586986
- type: nauc_map_at_3_max
value: 28.64334196655506
- type: nauc_map_at_3_std
value: -0.3639494958439447
- type: nauc_map_at_5_diff1
value: 37.391645974882024
- type: nauc_map_at_5_max
value: 28.973156260444533
- type: nauc_map_at_5_std
value: -0.026789953157566142
- type: nauc_mrr_at_1000_diff1
value: 37.08768410345192
- type: nauc_mrr_at_1000_max
value: 30.226139008765173
- type: nauc_mrr_at_1000_std
value: 0.9258173149071044
- type: nauc_mrr_at_100_diff1
value: 37.06958335624731
- type: nauc_mrr_at_100_max
value: 30.229943564905703
- type: nauc_mrr_at_100_std
value: 0.932361149242787
- type: nauc_mrr_at_10_diff1
value: 37.0206077048578
- type: nauc_mrr_at_10_max
value: 30.158443599717195
- type: nauc_mrr_at_10_std
value: 0.5492249230345497
- type: nauc_mrr_at_1_diff1
value: 42.978918552672035
- type: nauc_mrr_at_1_max
value: 29.114319394090987
- type: nauc_mrr_at_1_std
value: -0.7624439199673105
- type: nauc_mrr_at_20_diff1
value: 37.057384418223485
- type: nauc_mrr_at_20_max
value: 30.171076020906597
- type: nauc_mrr_at_20_std
value: 0.7891456760838766
- type: nauc_mrr_at_3_diff1
value: 37.78963260621373
- type: nauc_mrr_at_3_max
value: 30.057936692440613
- type: nauc_mrr_at_3_std
value: -0.2723394617050784
- type: nauc_mrr_at_5_diff1
value: 37.428672595130074
- type: nauc_mrr_at_5_max
value: 30.21732196933017
- type: nauc_mrr_at_5_std
value: 0.046615676950734625
- type: nauc_ndcg_at_1000_diff1
value: 34.910684324371516
- type: nauc_ndcg_at_1000_max
value: 30.43187052799894
- type: nauc_ndcg_at_1000_std
value: 3.7886613934368976
- type: nauc_ndcg_at_100_diff1
value: 34.435496295156035
- type: nauc_ndcg_at_100_max
value: 30.3229405609203
- type: nauc_ndcg_at_100_std
value: 3.837221374981068
- type: nauc_ndcg_at_10_diff1
value: 34.84989431829001
- type: nauc_ndcg_at_10_max
value: 29.56612074818309
- type: nauc_ndcg_at_10_std
value: 1.3497668647221701
- type: nauc_ndcg_at_1_diff1
value: 42.978918552672035
- type: nauc_ndcg_at_1_max
value: 29.114319394090987
- type: nauc_ndcg_at_1_std
value: -0.7624439199673105
- type: nauc_ndcg_at_20_diff1
value: 34.85666256341009
- type: nauc_ndcg_at_20_max
value: 29.749817141122936
- type: nauc_ndcg_at_20_std
value: 2.2719371477731314
- type: nauc_ndcg_at_3_diff1
value: 36.47550623795379
- type: nauc_ndcg_at_3_max
value: 29.18024982921919
- type: nauc_ndcg_at_3_std
value: -0.5158571946638861
- type: nauc_ndcg_at_5_diff1
value: 35.66325406382566
- type: nauc_ndcg_at_5_max
value: 29.52486267505514
- type: nauc_ndcg_at_5_std
value: 0.1446834436782509
- type: nauc_precision_at_1000_diff1
value: 5.179309010526755
- type: nauc_precision_at_1000_max
value: 9.078351835753596
- type: nauc_precision_at_1000_std
value: 1.0888951899790398
- type: nauc_precision_at_100_diff1
value: 11.746442333432986
- type: nauc_precision_at_100_max
value: 18.328100169309472
- type: nauc_precision_at_100_std
value: 6.488315017239334
- type: nauc_precision_at_10_diff1
value: 21.225993531448843
- type: nauc_precision_at_10_max
value: 26.786229561182516
- type: nauc_precision_at_10_std
value: 3.1118485436954697
- type: nauc_precision_at_1_diff1
value: 42.978918552672035
- type: nauc_precision_at_1_max
value: 29.114319394090987
- type: nauc_precision_at_1_std
value: -0.7624439199673105
- type: nauc_precision_at_20_diff1
value: 18.36569388308726
- type: nauc_precision_at_20_max
value: 24.567477667257474
- type: nauc_precision_at_20_std
value: 4.650751092711225
- type: nauc_precision_at_3_diff1
value: 29.268806480620423
- type: nauc_precision_at_3_max
value: 29.83598747609324
- type: nauc_precision_at_3_std
value: -0.4949630951452181
- type: nauc_precision_at_5_diff1
value: 25.82678700262483
- type: nauc_precision_at_5_max
value: 29.633692602172523
- type: nauc_precision_at_5_std
value: 0.3502444708980338
- type: nauc_recall_at_1000_diff1
value: 14.762867599197998
- type: nauc_recall_at_1000_max
value: 33.77703013085514
- type: nauc_recall_at_1000_std
value: 32.6887608409825
- type: nauc_recall_at_100_diff1
value: 21.717683611413836
- type: nauc_recall_at_100_max
value: 30.34761714689701
- type: nauc_recall_at_100_std
value: 17.14217507105933
- type: nauc_recall_at_10_diff1
value: 27.011051446233097
- type: nauc_recall_at_10_max
value: 28.011038995610356
- type: nauc_recall_at_10_std
value: 3.886680866597647
- type: nauc_recall_at_1_diff1
value: 43.59374607558271
- type: nauc_recall_at_1_max
value: 27.502697897665936
- type: nauc_recall_at_1_std
value: -0.7674781552217746
- type: nauc_recall_at_20_diff1
value: 26.40508046848651
- type: nauc_recall_at_20_max
value: 27.948123862879175
- type: nauc_recall_at_20_std
value: 7.068531738030853
- type: nauc_recall_at_3_diff1
value: 31.750498628363722
- type: nauc_recall_at_3_max
value: 28.059646483159213
- type: nauc_recall_at_3_std
value: 0.14742455169624066
- type: nauc_recall_at_5_diff1
value: 29.76053437646529
- type: nauc_recall_at_5_max
value: 28.594754498676544
- type: nauc_recall_at_5_std
value: 1.2832203560417643
- type: ndcg_at_1
value: 23.674999999999997
- type: ndcg_at_10
value: 32.957
- type: ndcg_at_100
value: 38.584
- type: ndcg_at_1000
value: 41.359
- type: ndcg_at_20
value: 35.093999999999994
- type: ndcg_at_3
value: 28.354000000000003
- type: ndcg_at_5
value: 30.305
- type: precision_at_1
value: 23.674999999999997
- type: precision_at_10
value: 6.077
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 3.665
- type: precision_at_3
value: 13.443
- type: precision_at_5
value: 9.600999999999999
- type: recall_at_1
value: 19.61
- type: recall_at_10
value: 44.263000000000005
- type: recall_at_100
value: 69.41199999999999
- type: recall_at_1000
value: 88.994
- type: recall_at_20
value: 52.198
- type: recall_at_3
value: 31.293
- type: recall_at_5
value: 36.415
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval (default)
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: main_score
value: 45.958
- type: map_at_1
value: 30.048000000000002
- type: map_at_10
value: 40.239000000000004
- type: map_at_100
value: 41.493
- type: map_at_1000
value: 41.582
- type: map_at_20
value: 40.955000000000005
- type: map_at_3
value: 37.097
- type: map_at_5
value: 38.824
- type: mrr_at_1
value: 35.07462686567165
- type: mrr_at_10
value: 44.19283789386398
- type: mrr_at_100
value: 45.08036630404521
- type: mrr_at_1000
value: 45.12183896199538
- type: mrr_at_20
value: 44.72186969518418
- type: mrr_at_3
value: 41.588930348258664
- type: mrr_at_5
value: 42.91355721393029
- type: nauc_map_at_1000_diff1
value: 48.76811649208976
- type: nauc_map_at_1000_max
value: 36.982550067325484
- type: nauc_map_at_1000_std
value: -0.5290701509883527
- type: nauc_map_at_100_diff1
value: 48.78866361951362
- type: nauc_map_at_100_max
value: 36.99605340092298
- type: nauc_map_at_100_std
value: -0.5018270195287452
- type: nauc_map_at_10_diff1
value: 48.928085770942
- type: nauc_map_at_10_max
value: 36.73594814898575
- type: nauc_map_at_10_std
value: -0.834228741972828
- type: nauc_map_at_1_diff1
value: 54.15059861768532
- type: nauc_map_at_1_max
value: 36.44764098320589
- type: nauc_map_at_1_std
value: -5.784565726873563
- type: nauc_map_at_20_diff1
value: 48.78043391669103
- type: nauc_map_at_20_max
value: 36.89270974821098
- type: nauc_map_at_20_std
value: -0.5945049292688708
- type: nauc_map_at_3_diff1
value: 49.79196039319051
- type: nauc_map_at_3_max
value: 36.09927970784603
- type: nauc_map_at_3_std
value: -2.0296894202771667
- type: nauc_map_at_5_diff1
value: 49.529286793014634
- type: nauc_map_at_5_max
value: 36.62049971049548
- type: nauc_map_at_5_std
value: -1.0187508539964767
- type: nauc_mrr_at_1000_diff1
value: 47.26105007482722
- type: nauc_mrr_at_1000_max
value: 37.69068231080959
- type: nauc_mrr_at_1000_std
value: -0.6510844517264812
- type: nauc_mrr_at_100_diff1
value: 47.25846776943622
- type: nauc_mrr_at_100_max
value: 37.67838976933151
- type: nauc_mrr_at_100_std
value: -0.6433335236107469
- type: nauc_mrr_at_10_diff1
value: 47.18519224298452
- type: nauc_mrr_at_10_max
value: 37.62431544151827
- type: nauc_mrr_at_10_std
value: -0.8474316078853749
- type: nauc_mrr_at_1_diff1
value: 51.77981410020824
- type: nauc_mrr_at_1_max
value: 38.02059405009231
- type: nauc_mrr_at_1_std
value: -5.783426776910806
- type: nauc_mrr_at_20_diff1
value: 47.14864249544432
- type: nauc_mrr_at_20_max
value: 37.601607893461406
- type: nauc_mrr_at_20_std
value: -0.6859574897303896
- type: nauc_mrr_at_3_diff1
value: 47.58252175947335
- type: nauc_mrr_at_3_max
value: 37.6324837651506
- type: nauc_mrr_at_3_std
value: -1.2482167973735598
- type: nauc_mrr_at_5_diff1
value: 47.448011129354974
- type: nauc_mrr_at_5_max
value: 37.7148441309698
- type: nauc_mrr_at_5_std
value: -0.7119792397225159
- type: nauc_ndcg_at_1000_diff1
value: 46.6329460576133
- type: nauc_ndcg_at_1000_max
value: 37.51805344108184
- type: nauc_ndcg_at_1000_std
value: 1.8100059353579894
- type: nauc_ndcg_at_100_diff1
value: 46.66586884984403
- type: nauc_ndcg_at_100_max
value: 37.64300440363974
- type: nauc_ndcg_at_100_std
value: 2.500233245881423
- type: nauc_ndcg_at_10_diff1
value: 46.615015396347644
- type: nauc_ndcg_at_10_max
value: 36.78201798029491
- type: nauc_ndcg_at_10_std
value: 1.0809742189657263
- type: nauc_ndcg_at_1_diff1
value: 51.77981410020824
- type: nauc_ndcg_at_1_max
value: 38.02059405009231
- type: nauc_ndcg_at_1_std
value: -5.783426776910806
- type: nauc_ndcg_at_20_diff1
value: 46.282072099888325
- type: nauc_ndcg_at_20_max
value: 37.003478966138836
- type: nauc_ndcg_at_20_std
value: 1.9291637916464186
- type: nauc_ndcg_at_3_diff1
value: 47.539278944889126
- type: nauc_ndcg_at_3_max
value: 36.43508238199665
- type: nauc_ndcg_at_3_std
value: -0.6027788390857911
- type: nauc_ndcg_at_5_diff1
value: 47.55837749401022
- type: nauc_ndcg_at_5_max
value: 36.78249382035288
- type: nauc_ndcg_at_5_std
value: 0.8497645104808546
- type: nauc_precision_at_1000_diff1
value: -20.71803333315221
- type: nauc_precision_at_1000_max
value: -4.38547466190951
- type: nauc_precision_at_1000_std
value: -0.0853978825586052
- type: nauc_precision_at_100_diff1
value: -8.67085404598523
- type: nauc_precision_at_100_max
value: 9.733682801445893
- type: nauc_precision_at_100_std
value: 7.507170439875122
- type: nauc_precision_at_10_diff1
value: 14.495060576585853
- type: nauc_precision_at_10_max
value: 24.4514279841787
- type: nauc_precision_at_10_std
value: 5.59489027531012
- type: nauc_precision_at_1_diff1
value: 51.77981410020824
- type: nauc_precision_at_1_max
value: 38.02059405009231
- type: nauc_precision_at_1_std
value: -5.783426776910806
- type: nauc_precision_at_20_diff1
value: 6.509848499042286
- type: nauc_precision_at_20_max
value: 20.348715961396525
- type: nauc_precision_at_20_std
value: 8.193012313602315
- type: nauc_precision_at_3_diff1
value: 32.384501021918794
- type: nauc_precision_at_3_max
value: 31.935466435393828
- type: nauc_precision_at_3_std
value: 3.0560771209934994
- type: nauc_precision_at_5_diff1
value: 25.702459594777277
- type: nauc_precision_at_5_max
value: 30.014370132120067
- type: nauc_precision_at_5_std
value: 6.4512965213006925
- type: nauc_recall_at_1000_diff1
value: 36.20840483033314
- type: nauc_recall_at_1000_max
value: 45.47785143996727
- type: nauc_recall_at_1000_std
value: 37.14510941691126
- type: nauc_recall_at_100_diff1
value: 39.11101186057974
- type: nauc_recall_at_100_max
value: 38.066390280827925
- type: nauc_recall_at_100_std
value: 21.470218305879797
- type: nauc_recall_at_10_diff1
value: 39.70476039879197
- type: nauc_recall_at_10_max
value: 33.75721430862531
- type: nauc_recall_at_10_std
value: 6.8486633835335295
- type: nauc_recall_at_1_diff1
value: 54.15059861768532
- type: nauc_recall_at_1_max
value: 36.44764098320589
- type: nauc_recall_at_1_std
value: -5.784565726873563
- type: nauc_recall_at_20_diff1
value: 37.86978682409901
- type: nauc_recall_at_20_max
value: 33.96219184798075
- type: nauc_recall_at_20_std
value: 11.029348617729221
- type: nauc_recall_at_3_diff1
value: 43.72514359112328
- type: nauc_recall_at_3_max
value: 33.77645792572399
- type: nauc_recall_at_3_std
value: 2.428536024679842
- type: nauc_recall_at_5_diff1
value: 43.06859065126547
- type: nauc_recall_at_5_max
value: 34.665515195886755
- type: nauc_recall_at_5_std
value: 5.905094189769508
- type: ndcg_at_1
value: 35.075
- type: ndcg_at_10
value: 45.958
- type: ndcg_at_100
value: 51.353
- type: ndcg_at_1000
value: 53.173
- type: ndcg_at_20
value: 48.191
- type: ndcg_at_3
value: 40.473
- type: ndcg_at_5
value: 42.902
- type: precision_at_1
value: 35.075
- type: precision_at_10
value: 7.836
- type: precision_at_100
value: 1.176
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_20
value: 4.529
- type: precision_at_3
value: 18.315
- type: precision_at_5
value: 12.854
- type: recall_at_1
value: 30.048000000000002
- type: recall_at_10
value: 59.248
- type: recall_at_100
value: 82.111
- type: recall_at_1000
value: 94.592
- type: recall_at_20
value: 67.227
- type: recall_at_3
value: 44.471
- type: recall_at_5
value: 50.512
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval (default)
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: main_score
value: 43.951
- type: map_at_1
value: 27.964
- type: map_at_10
value: 37.692
- type: map_at_100
value: 39.365
- type: map_at_1000
value: 39.594
- type: map_at_20
value: 38.576
- type: map_at_3
value: 34.388999999999996
- type: map_at_5
value: 36.081
- type: mrr_at_1
value: 33.00395256916996
- type: mrr_at_10
value: 42.18434343434343
- type: mrr_at_100
value: 43.16939140168712
- type: mrr_at_1000
value: 43.21751142486867
- type: mrr_at_20
value: 42.75017657291823
- type: mrr_at_3
value: 39.42687747035574
- type: mrr_at_5
value: 41.037549407114625
- type: nauc_map_at_1000_diff1
value: 45.415876956978444
- type: nauc_map_at_1000_max
value: 32.59464568060356
- type: nauc_map_at_1000_std
value: 4.262293486763028
- type: nauc_map_at_100_diff1
value: 45.313981831518504
- type: nauc_map_at_100_max
value: 32.68688502742583
- type: nauc_map_at_100_std
value: 4.039368086319619
- type: nauc_map_at_10_diff1
value: 45.92372812130138
- type: nauc_map_at_10_max
value: 32.37880184303658
- type: nauc_map_at_10_std
value: 2.7481583678385197
- type: nauc_map_at_1_diff1
value: 52.388363332106294
- type: nauc_map_at_1_max
value: 32.184315523196425
- type: nauc_map_at_1_std
value: -2.5295830272351103
- type: nauc_map_at_20_diff1
value: 45.32570996908948
- type: nauc_map_at_20_max
value: 32.48108405862084
- type: nauc_map_at_20_std
value: 3.3087482176392657
- type: nauc_map_at_3_diff1
value: 46.85896834397904
- type: nauc_map_at_3_max
value: 32.007995254903484
- type: nauc_map_at_3_std
value: 0.5938674689810656
- type: nauc_map_at_5_diff1
value: 46.04911706905517
- type: nauc_map_at_5_max
value: 31.503815774957864
- type: nauc_map_at_5_std
value: 1.696567086029842
- type: nauc_mrr_at_1000_diff1
value: 44.33835674531675
- type: nauc_mrr_at_1000_max
value: 31.313824311436395
- type: nauc_mrr_at_1000_std
value: 5.585471654306175
- type: nauc_mrr_at_100_diff1
value: 44.315294514270484
- type: nauc_mrr_at_100_max
value: 31.311504710219847
- type: nauc_mrr_at_100_std
value: 5.61460359116941
- type: nauc_mrr_at_10_diff1
value: 44.34727343874123
- type: nauc_mrr_at_10_max
value: 31.214381968197323
- type: nauc_mrr_at_10_std
value: 5.358694756592366
- type: nauc_mrr_at_1_diff1
value: 50.076532500963985
- type: nauc_mrr_at_1_max
value: 31.893100393844602
- type: nauc_mrr_at_1_std
value: 1.6345537979715576
- type: nauc_mrr_at_20_diff1
value: 44.1861019252696
- type: nauc_mrr_at_20_max
value: 31.18274283874542
- type: nauc_mrr_at_20_std
value: 5.4141357527576845
- type: nauc_mrr_at_3_diff1
value: 44.84108608280401
- type: nauc_mrr_at_3_max
value: 31.260937651084618
- type: nauc_mrr_at_3_std
value: 4.32099205393322
- type: nauc_mrr_at_5_diff1
value: 43.957386353594615
- type: nauc_mrr_at_5_max
value: 30.521363697945542
- type: nauc_mrr_at_5_std
value: 5.111409983030411
- type: nauc_ndcg_at_1000_diff1
value: 43.302642169855055
- type: nauc_ndcg_at_1000_max
value: 33.60452429135082
- type: nauc_ndcg_at_1000_std
value: 8.11547083584825
- type: nauc_ndcg_at_100_diff1
value: 42.2303708262867
- type: nauc_ndcg_at_100_max
value: 33.14409254803362
- type: nauc_ndcg_at_100_std
value: 8.506478151524918
- type: nauc_ndcg_at_10_diff1
value: 43.767161847177874
- type: nauc_ndcg_at_10_max
value: 32.07274047816015
- type: nauc_ndcg_at_10_std
value: 6.481707365740993
- type: nauc_ndcg_at_1_diff1
value: 50.076532500963985
- type: nauc_ndcg_at_1_max
value: 31.893100393844602
- type: nauc_ndcg_at_1_std
value: 1.6345537979715576
- type: nauc_ndcg_at_20_diff1
value: 42.48660354871869
- type: nauc_ndcg_at_20_max
value: 32.14769800363052
- type: nauc_ndcg_at_20_std
value: 6.916826847813196
- type: nauc_ndcg_at_3_diff1
value: 44.243795943637885
- type: nauc_ndcg_at_3_max
value: 31.48406187592552
- type: nauc_ndcg_at_3_std
value: 3.701214987805142
- type: nauc_ndcg_at_5_diff1
value: 43.10518503245774
- type: nauc_ndcg_at_5_max
value: 30.40120224782154
- type: nauc_ndcg_at_5_std
value: 5.546435005776079
- type: nauc_precision_at_1000_diff1
value: -3.993607814341118
- type: nauc_precision_at_1000_max
value: -10.729918180758647
- type: nauc_precision_at_1000_std
value: 23.024270860729565
- type: nauc_precision_at_100_diff1
value: -1.6566704673461674
- type: nauc_precision_at_100_max
value: 1.458081777116833
- type: nauc_precision_at_100_std
value: 28.18670349958774
- type: nauc_precision_at_10_diff1
value: 12.792685733612547
- type: nauc_precision_at_10_max
value: 20.206988909219923
- type: nauc_precision_at_10_std
value: 22.53427005574754
- type: nauc_precision_at_1_diff1
value: 50.076532500963985
- type: nauc_precision_at_1_max
value: 31.893100393844602
- type: nauc_precision_at_1_std
value: 1.6345537979715576
- type: nauc_precision_at_20_diff1
value: 3.9538716249460384
- type: nauc_precision_at_20_max
value: 16.21789405497108
- type: nauc_precision_at_20_std
value: 24.348575609653487
- type: nauc_precision_at_3_diff1
value: 27.339649813425037
- type: nauc_precision_at_3_max
value: 26.223578620825194
- type: nauc_precision_at_3_std
value: 10.996293038771013
- type: nauc_precision_at_5_diff1
value: 18.869561918004056
- type: nauc_precision_at_5_max
value: 20.709270779442967
- type: nauc_precision_at_5_std
value: 17.384126283115698
- type: nauc_recall_at_1000_diff1
value: 16.194455177769477
- type: nauc_recall_at_1000_max
value: 58.66023925715464
- type: nauc_recall_at_1000_std
value: 58.25233058362688
- type: nauc_recall_at_100_diff1
value: 21.15194880649059
- type: nauc_recall_at_100_max
value: 32.44572125606809
- type: nauc_recall_at_100_std
value: 31.94013583626886
- type: nauc_recall_at_10_diff1
value: 37.66956774103016
- type: nauc_recall_at_10_max
value: 30.925800174559832
- type: nauc_recall_at_10_std
value: 9.299447104776808
- type: nauc_recall_at_1_diff1
value: 52.388363332106294
- type: nauc_recall_at_1_max
value: 32.184315523196425
- type: nauc_recall_at_1_std
value: -2.5295830272351103
- type: nauc_recall_at_20_diff1
value: 31.552065521976175
- type: nauc_recall_at_20_max
value: 29.74690417386352
- type: nauc_recall_at_20_std
value: 14.180880251108768
- type: nauc_recall_at_3_diff1
value: 40.454215107630645
- type: nauc_recall_at_3_max
value: 30.042646762149484
- type: nauc_recall_at_3_std
value: 2.8753957129080447
- type: nauc_recall_at_5_diff1
value: 36.586530595627345
- type: nauc_recall_at_5_max
value: 27.14535453599763
- type: nauc_recall_at_5_std
value: 5.997416531615016
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 43.951
- type: ndcg_at_100
value: 49.741
- type: ndcg_at_1000
value: 51.946000000000005
- type: ndcg_at_20
value: 46.168
- type: ndcg_at_3
value: 38.550000000000004
- type: ndcg_at_5
value: 41.014
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 8.577
- type: precision_at_100
value: 1.617
- type: precision_at_1000
value: 0.247
- type: precision_at_20
value: 5.346
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.281
- type: recall_at_1
value: 27.964
- type: recall_at_10
value: 55.702
- type: recall_at_100
value: 81.69999999999999
- type: recall_at_1000
value: 94.926
- type: recall_at_20
value: 64.142
- type: recall_at_3
value: 40.793
- type: recall_at_5
value: 47.046
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval (default)
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: main_score
value: 36.787
- type: map_at_1
value: 23.915
- type: map_at_10
value: 31.735000000000003
- type: map_at_100
value: 32.806000000000004
- type: map_at_1000
value: 32.9
- type: map_at_20
value: 32.301
- type: map_at_3
value: 28.436
- type: map_at_5
value: 30.575999999999997
- type: mrr_at_1
value: 25.87800369685767
- type: mrr_at_10
value: 33.96487985212568
- type: mrr_at_100
value: 34.89689439154211
- type: mrr_at_1000
value: 34.95770776172314
- type: mrr_at_20
value: 34.46162046071626
- type: mrr_at_3
value: 31.022797288971038
- type: mrr_at_5
value: 32.991373998767706
- type: nauc_map_at_1000_diff1
value: 41.411411226747745
- type: nauc_map_at_1000_max
value: 25.65879736535548
- type: nauc_map_at_1000_std
value: -1.0008275040804908
- type: nauc_map_at_100_diff1
value: 41.41167985449119
- type: nauc_map_at_100_max
value: 25.6584285870538
- type: nauc_map_at_100_std
value: -1.0142856959019102
- type: nauc_map_at_10_diff1
value: 41.56309522812082
- type: nauc_map_at_10_max
value: 25.66930315132308
- type: nauc_map_at_10_std
value: -1.5502752272271925
- type: nauc_map_at_1_diff1
value: 49.425905570437116
- type: nauc_map_at_1_max
value: 23.541197544220545
- type: nauc_map_at_1_std
value: -4.360019071552991
- type: nauc_map_at_20_diff1
value: 41.38734082223361
- type: nauc_map_at_20_max
value: 25.620079428409127
- type: nauc_map_at_20_std
value: -1.4042978268225208
- type: nauc_map_at_3_diff1
value: 43.620208615142644
- type: nauc_map_at_3_max
value: 25.71853688922115
- type: nauc_map_at_3_std
value: -1.8769387740803976
- type: nauc_map_at_5_diff1
value: 41.97672177355559
- type: nauc_map_at_5_max
value: 26.035163926212334
- type: nauc_map_at_5_std
value: -2.11363374949669
- type: nauc_mrr_at_1000_diff1
value: 40.49508214793536
- type: nauc_mrr_at_1000_max
value: 26.620330593078616
- type: nauc_mrr_at_1000_std
value: -0.3634968622281096
- type: nauc_mrr_at_100_diff1
value: 40.465539927932895
- type: nauc_mrr_at_100_max
value: 26.61340099486517
- type: nauc_mrr_at_100_std
value: -0.35206443295384626
- type: nauc_mrr_at_10_diff1
value: 40.573109996611144
- type: nauc_mrr_at_10_max
value: 26.71149031482008
- type: nauc_mrr_at_10_std
value: -0.9166267231737095
- type: nauc_mrr_at_1_diff1
value: 48.29138921797353
- type: nauc_mrr_at_1_max
value: 24.927185077919813
- type: nauc_mrr_at_1_std
value: -4.332258870474254
- type: nauc_mrr_at_20_diff1
value: 40.40723703282917
- type: nauc_mrr_at_20_max
value: 26.59812216818852
- type: nauc_mrr_at_20_std
value: -0.6209755736362238
- type: nauc_mrr_at_3_diff1
value: 42.1104901364276
- type: nauc_mrr_at_3_max
value: 27.158847936548643
- type: nauc_mrr_at_3_std
value: -0.4768337585685568
- type: nauc_mrr_at_5_diff1
value: 40.822869162681044
- type: nauc_mrr_at_5_max
value: 27.137910001879362
- type: nauc_mrr_at_5_std
value: -0.9466391394053442
- type: nauc_ndcg_at_1000_diff1
value: 38.696314753739436
- type: nauc_ndcg_at_1000_max
value: 26.428473010143723
- type: nauc_ndcg_at_1000_std
value: 2.3402588363330272
- type: nauc_ndcg_at_100_diff1
value: 37.898005515159134
- type: nauc_ndcg_at_100_max
value: 25.68578401772755
- type: nauc_ndcg_at_100_std
value: 2.6295479217711453
- type: nauc_ndcg_at_10_diff1
value: 38.28392376933128
- type: nauc_ndcg_at_10_max
value: 25.850126852320628
- type: nauc_ndcg_at_10_std
value: -0.5560800621942364
- type: nauc_ndcg_at_1_diff1
value: 48.29138921797353
- type: nauc_ndcg_at_1_max
value: 24.927185077919813
- type: nauc_ndcg_at_1_std
value: -4.332258870474254
- type: nauc_ndcg_at_20_diff1
value: 37.673206490621396
- type: nauc_ndcg_at_20_max
value: 25.583716405723937
- type: nauc_ndcg_at_20_std
value: 0.1953323128781521
- type: nauc_ndcg_at_3_diff1
value: 41.41453304326318
- type: nauc_ndcg_at_3_max
value: 26.61748802333722
- type: nauc_ndcg_at_3_std
value: -0.5476999435389482
- type: nauc_ndcg_at_5_diff1
value: 38.98483145760039
- type: nauc_ndcg_at_5_max
value: 26.777342255255647
- type: nauc_ndcg_at_5_std
value: -1.3051979393226087
- type: nauc_precision_at_1000_diff1
value: -14.856110292516775
- type: nauc_precision_at_1000_max
value: -5.848771877910694
- type: nauc_precision_at_1000_std
value: 15.34411836334217
- type: nauc_precision_at_100_diff1
value: 3.4939759054218333
- type: nauc_precision_at_100_max
value: 16.356980505161676
- type: nauc_precision_at_100_std
value: 24.608528146713404
- type: nauc_precision_at_10_diff1
value: 18.407011878399366
- type: nauc_precision_at_10_max
value: 24.800531781431303
- type: nauc_precision_at_10_std
value: 8.698077886826768
- type: nauc_precision_at_1_diff1
value: 48.29138921797353
- type: nauc_precision_at_1_max
value: 24.927185077919813
- type: nauc_precision_at_1_std
value: -4.332258870474254
- type: nauc_precision_at_20_diff1
value: 14.541755251519852
- type: nauc_precision_at_20_max
value: 21.97457692156994
- type: nauc_precision_at_20_std
value: 11.578274506336108
- type: nauc_precision_at_3_diff1
value: 33.23900172092169
- type: nauc_precision_at_3_max
value: 28.967167315040072
- type: nauc_precision_at_3_std
value: 3.6476384007647136
- type: nauc_precision_at_5_diff1
value: 24.289869074161572
- type: nauc_precision_at_5_max
value: 30.194681915534748
- type: nauc_precision_at_5_std
value: 4.054952118325518
- type: nauc_recall_at_1000_diff1
value: 29.11829826259677
- type: nauc_recall_at_1000_max
value: 39.25426036108557
- type: nauc_recall_at_1000_std
value: 36.3591900236558
- type: nauc_recall_at_100_diff1
value: 22.900753883773152
- type: nauc_recall_at_100_max
value: 20.40038512546472
- type: nauc_recall_at_100_std
value: 20.736883688677032
- type: nauc_recall_at_10_diff1
value: 29.183788265901534
- type: nauc_recall_at_10_max
value: 24.025061243297948
- type: nauc_recall_at_10_std
value: 0.8086675135479778
- type: nauc_recall_at_1_diff1
value: 49.425905570437116
- type: nauc_recall_at_1_max
value: 23.541197544220545
- type: nauc_recall_at_1_std
value: -4.360019071552991
- type: nauc_recall_at_20_diff1
value: 26.21751562892008
- type: nauc_recall_at_20_max
value: 22.78118083757151
- type: nauc_recall_at_20_std
value: 3.6627753391462825
- type: nauc_recall_at_3_diff1
value: 37.20946031817167
- type: nauc_recall_at_3_max
value: 27.059274716311005
- type: nauc_recall_at_3_std
value: 0.8325033099157856
- type: nauc_recall_at_5_diff1
value: 31.269097954181547
- type: nauc_recall_at_5_max
value: 26.853918763485463
- type: nauc_recall_at_5_std
value: -0.9226280392689135
- type: ndcg_at_1
value: 25.878
- type: ndcg_at_10
value: 36.787
- type: ndcg_at_100
value: 42.085
- type: ndcg_at_1000
value: 44.303
- type: ndcg_at_20
value: 38.690000000000005
- type: ndcg_at_3
value: 30.657
- type: ndcg_at_5
value: 34.242
- type: precision_at_1
value: 25.878
- type: precision_at_10
value: 5.86
- type: precision_at_100
value: 0.9209999999999999
- type: precision_at_1000
value: 0.123
- type: precision_at_20
value: 3.392
- type: precision_at_3
value: 12.815999999999999
- type: precision_at_5
value: 9.76
- type: recall_at_1
value: 23.915
- type: recall_at_10
value: 50.196
- type: recall_at_100
value: 74.66199999999999
- type: recall_at_1000
value: 90.949
- type: recall_at_20
value: 57.404999999999994
- type: recall_at_3
value: 34.156
- type: recall_at_5
value: 42.671
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER (default)
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: main_score
value: 37.835
- type: map_at_1
value: 16.408
- type: map_at_10
value: 28.102
- type: map_at_100
value: 30.245
- type: map_at_1000
value: 30.44
- type: map_at_20
value: 29.325000000000003
- type: map_at_3
value: 23.49
- type: map_at_5
value: 26.075
- type: mrr_at_1
value: 36.48208469055375
- type: mrr_at_10
value: 49.35310997363119
- type: mrr_at_100
value: 50.12144284733654
- type: mrr_at_1000
value: 50.14901403511052
- type: mrr_at_20
value: 49.86902911912245
- type: mrr_at_3
value: 46.3952225841477
- type: mrr_at_5
value: 48.16720955483177
- type: nauc_map_at_1000_diff1
value: 25.310850675849366
- type: nauc_map_at_1000_max
value: 37.09503121120242
- type: nauc_map_at_1000_std
value: 20.554977994819744
- type: nauc_map_at_100_diff1
value: 25.299966872724244
- type: nauc_map_at_100_max
value: 37.07757844963315
- type: nauc_map_at_100_std
value: 20.51941286942183
- type: nauc_map_at_10_diff1
value: 24.97097616375397
- type: nauc_map_at_10_max
value: 36.21802106435102
- type: nauc_map_at_10_std
value: 19.04179638942543
- type: nauc_map_at_1_diff1
value: 31.079857565386533
- type: nauc_map_at_1_max
value: 31.982413172438463
- type: nauc_map_at_1_std
value: 10.837120383351104
- type: nauc_map_at_20_diff1
value: 25.274561705603706
- type: nauc_map_at_20_max
value: 36.846696717838334
- type: nauc_map_at_20_std
value: 20.073241003865924
- type: nauc_map_at_3_diff1
value: 26.01764061167898
- type: nauc_map_at_3_max
value: 33.20138049456973
- type: nauc_map_at_3_std
value: 14.230139192374121
- type: nauc_map_at_5_diff1
value: 25.09123372044605
- type: nauc_map_at_5_max
value: 34.89124594920631
- type: nauc_map_at_5_std
value: 16.70319126587545
- type: nauc_mrr_at_1000_diff1
value: 26.375252226612467
- type: nauc_mrr_at_1000_max
value: 35.477327849397575
- type: nauc_mrr_at_1000_std
value: 21.16791565302958
- type: nauc_mrr_at_100_diff1
value: 26.377160750801053
- type: nauc_mrr_at_100_max
value: 35.49211341503135
- type: nauc_mrr_at_100_std
value: 21.19391590137402
- type: nauc_mrr_at_10_diff1
value: 26.311212981822052
- type: nauc_mrr_at_10_max
value: 35.588662356341594
- type: nauc_mrr_at_10_std
value: 21.24369092394658
- type: nauc_mrr_at_1_diff1
value: 27.198678190552865
- type: nauc_mrr_at_1_max
value: 31.017785831517703
- type: nauc_mrr_at_1_std
value: 16.42737819423067
- type: nauc_mrr_at_20_diff1
value: 26.32032615102818
- type: nauc_mrr_at_20_max
value: 35.57367760733253
- type: nauc_mrr_at_20_std
value: 21.29294301389274
- type: nauc_mrr_at_3_diff1
value: 26.092036806660612
- type: nauc_mrr_at_3_max
value: 34.31665231049064
- type: nauc_mrr_at_3_std
value: 19.6674385140531
- type: nauc_mrr_at_5_diff1
value: 26.151603897636
- type: nauc_mrr_at_5_max
value: 35.17650680885225
- type: nauc_mrr_at_5_std
value: 20.573080891241787
- type: nauc_ndcg_at_1000_diff1
value: 25.65498442794641
- type: nauc_ndcg_at_1000_max
value: 40.084443405536575
- type: nauc_ndcg_at_1000_std
value: 26.795793663747304
- type: nauc_ndcg_at_100_diff1
value: 25.237187946595334
- type: nauc_ndcg_at_100_max
value: 40.07873047722652
- type: nauc_ndcg_at_100_std
value: 26.7859861991128
- type: nauc_ndcg_at_10_diff1
value: 24.236337614114206
- type: nauc_ndcg_at_10_max
value: 38.22607740025273
- type: nauc_ndcg_at_10_std
value: 23.272039117089907
- type: nauc_ndcg_at_1_diff1
value: 27.198678190552865
- type: nauc_ndcg_at_1_max
value: 31.017785831517703
- type: nauc_ndcg_at_1_std
value: 16.42737819423067
- type: nauc_ndcg_at_20_diff1
value: 24.724738711624312
- type: nauc_ndcg_at_20_max
value: 39.24548121605356
- type: nauc_ndcg_at_20_std
value: 25.228893154519525
- type: nauc_ndcg_at_3_diff1
value: 24.658317235435362
- type: nauc_ndcg_at_3_max
value: 33.335101247559486
- type: nauc_ndcg_at_3_std
value: 17.01054703727399
- type: nauc_ndcg_at_5_diff1
value: 24.31704097148463
- type: nauc_ndcg_at_5_max
value: 36.14336690565576
- type: nauc_ndcg_at_5_std
value: 19.69214379372329
- type: nauc_precision_at_1000_diff1
value: -2.8924045105824114
- type: nauc_precision_at_1000_max
value: 5.89979568196701
- type: nauc_precision_at_1000_std
value: 19.595702020634185
- type: nauc_precision_at_100_diff1
value: 3.8998389837458203
- type: nauc_precision_at_100_max
value: 19.95415054849711
- type: nauc_precision_at_100_std
value: 29.065971451387774
- type: nauc_precision_at_10_diff1
value: 9.462651146259638
- type: nauc_precision_at_10_max
value: 29.680510389273447
- type: nauc_precision_at_10_std
value: 29.345395013388686
- type: nauc_precision_at_1_diff1
value: 27.198678190552865
- type: nauc_precision_at_1_max
value: 31.017785831517703
- type: nauc_precision_at_1_std
value: 16.42737819423067
- type: nauc_precision_at_20_diff1
value: 8.261243519089712
- type: nauc_precision_at_20_max
value: 27.929320115110023
- type: nauc_precision_at_20_std
value: 31.459012229844742
- type: nauc_precision_at_3_diff1
value: 15.273777636613955
- type: nauc_precision_at_3_max
value: 28.204944302903996
- type: nauc_precision_at_3_std
value: 19.80674678483048
- type: nauc_precision_at_5_diff1
value: 11.487918382134389
- type: nauc_precision_at_5_max
value: 28.62173130088314
- type: nauc_precision_at_5_std
value: 23.626716801834526
- type: nauc_recall_at_1000_diff1
value: 22.332855309918482
- type: nauc_recall_at_1000_max
value: 46.19202209060043
- type: nauc_recall_at_1000_std
value: 48.263282583608465
- type: nauc_recall_at_100_diff1
value: 18.606992875038713
- type: nauc_recall_at_100_max
value: 39.8050305915271
- type: nauc_recall_at_100_std
value: 36.24645472497941
- type: nauc_recall_at_10_diff1
value: 18.232071663795725
- type: nauc_recall_at_10_max
value: 37.67075857623269
- type: nauc_recall_at_10_std
value: 26.788012514411548
- type: nauc_recall_at_1_diff1
value: 31.079857565386533
- type: nauc_recall_at_1_max
value: 31.982413172438463
- type: nauc_recall_at_1_std
value: 10.837120383351104
- type: nauc_recall_at_20_diff1
value: 18.306236535885443
- type: nauc_recall_at_20_max
value: 38.24540146525127
- type: nauc_recall_at_20_std
value: 30.329987162287033
- type: nauc_recall_at_3_diff1
value: 22.00237059430624
- type: nauc_recall_at_3_max
value: 32.60315366638792
- type: nauc_recall_at_3_std
value: 15.991207369096077
- type: nauc_recall_at_5_diff1
value: 19.305335536530087
- type: nauc_recall_at_5_max
value: 35.001491825528966
- type: nauc_recall_at_5_std
value: 20.46796749831726
- type: ndcg_at_1
value: 36.482
- type: ndcg_at_10
value: 37.835
- type: ndcg_at_100
value: 45.332
- type: ndcg_at_1000
value: 48.503
- type: ndcg_at_20
value: 40.991
- type: ndcg_at_3
value: 31.735999999999997
- type: ndcg_at_5
value: 34.015
- type: precision_at_1
value: 36.482
- type: precision_at_10
value: 11.726
- type: precision_at_100
value: 1.978
- type: precision_at_1000
value: 0.258
- type: precision_at_20
value: 7.234999999999999
- type: precision_at_3
value: 23.822
- type: precision_at_5
value: 18.319
- type: recall_at_1
value: 16.408
- type: recall_at_10
value: 43.915
- type: recall_at_100
value: 69.173
- type: recall_at_1000
value: 86.58
- type: recall_at_20
value: 52.744
- type: recall_at_3
value: 28.682999999999996
- type: recall_at_5
value: 35.481
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: dev
type: mteb/dbpedia
metrics:
- type: main_score
value: 55.144000000000005
- type: map_at_1
value: 11.826
- type: map_at_10
value: 27.172
- type: map_at_100
value: 38.257000000000005
- type: map_at_1000
value: 40.097
- type: map_at_20
value: 32.123000000000005
- type: map_at_3
value: 19.369
- type: map_at_5
value: 22.351
- type: mrr_at_1
value: 80.59701492537313
- type: mrr_at_10
value: 86.33499170812604
- type: mrr_at_100
value: 86.45227090143814
- type: mrr_at_1000
value: 86.45227090143814
- type: mrr_at_20
value: 86.40961857379767
- type: mrr_at_3
value: 85.57213930348257
- type: mrr_at_5
value: 86.16915422885573
- type: nauc_map_at_1000_diff1
value: 31.072194916682385
- type: nauc_map_at_1000_max
value: 21.804811518161618
- type: nauc_map_at_1000_std
value: -2.951237857245905
- type: nauc_map_at_100_diff1
value: 32.56060360145279
- type: nauc_map_at_100_max
value: 21.242298925848857
- type: nauc_map_at_100_std
value: -6.601591083112349
- type: nauc_map_at_10_diff1
value: 45.43742246641206
- type: nauc_map_at_10_max
value: 17.21692770004215
- type: nauc_map_at_10_std
value: -26.109238645663996
- type: nauc_map_at_1_diff1
value: 59.342871771182246
- type: nauc_map_at_1_max
value: 7.61369981711965
- type: nauc_map_at_1_std
value: -43.77056595417028
- type: nauc_map_at_20_diff1
value: 41.28476777471806
- type: nauc_map_at_20_max
value: 19.146619219149965
- type: nauc_map_at_20_std
value: -18.138173228934672
- type: nauc_map_at_3_diff1
value: 50.01554010863971
- type: nauc_map_at_3_max
value: 8.780067252066651
- type: nauc_map_at_3_std
value: -38.97142391357302
- type: nauc_map_at_5_diff1
value: 49.10129058095009
- type: nauc_map_at_5_max
value: 11.656196663534313
- type: nauc_map_at_5_std
value: -34.72355570603387
- type: nauc_mrr_at_1000_diff1
value: 58.78754980587956
- type: nauc_mrr_at_1000_max
value: 49.8860031204746
- type: nauc_mrr_at_1000_std
value: 8.296926794472618
- type: nauc_mrr_at_100_diff1
value: 58.78754980587956
- type: nauc_mrr_at_100_max
value: 49.8860031204746
- type: nauc_mrr_at_100_std
value: 8.296926794472618
- type: nauc_mrr_at_10_diff1
value: 58.91162028285357
- type: nauc_mrr_at_10_max
value: 50.335451094273985
- type: nauc_mrr_at_10_std
value: 9.007586894775534
- type: nauc_mrr_at_1_diff1
value: 57.59201084653059
- type: nauc_mrr_at_1_max
value: 37.00330988333697
- type: nauc_mrr_at_1_std
value: -1.747744103132987
- type: nauc_mrr_at_20_diff1
value: 58.75119254917311
- type: nauc_mrr_at_20_max
value: 50.05039741296804
- type: nauc_mrr_at_20_std
value: 8.560730939300612
- type: nauc_mrr_at_3_diff1
value: 59.25818070675737
- type: nauc_mrr_at_3_max
value: 50.21290391831141
- type: nauc_mrr_at_3_std
value: 5.888545263632479
- type: nauc_mrr_at_5_diff1
value: 58.86883176773856
- type: nauc_mrr_at_5_max
value: 50.957401246316245
- type: nauc_mrr_at_5_std
value: 9.799770718943135
- type: nauc_ndcg_at_1000_diff1
value: 31.017440394196054
- type: nauc_ndcg_at_1000_max
value: 34.76839774920455
- type: nauc_ndcg_at_1000_std
value: 18.394503679584197
- type: nauc_ndcg_at_100_diff1
value: 33.46897937355806
- type: nauc_ndcg_at_100_max
value: 30.1308096551965
- type: nauc_ndcg_at_100_std
value: 4.811329419196584
- type: nauc_ndcg_at_10_diff1
value: 34.738421563806796
- type: nauc_ndcg_at_10_max
value: 31.63787832072571
- type: nauc_ndcg_at_10_std
value: 6.047471445378135
- type: nauc_ndcg_at_1_diff1
value: 41.838767871859105
- type: nauc_ndcg_at_1_max
value: 29.76412378121819
- type: nauc_ndcg_at_1_std
value: -6.662981751747337
- type: nauc_ndcg_at_20_diff1
value: 37.2936047770493
- type: nauc_ndcg_at_20_max
value: 27.509688843351928
- type: nauc_ndcg_at_20_std
value: -4.226207480988211
- type: nauc_ndcg_at_3_diff1
value: 26.741771232683075
- type: nauc_ndcg_at_3_max
value: 27.39386896838887
- type: nauc_ndcg_at_3_std
value: 1.6639808702221104
- type: nauc_ndcg_at_5_diff1
value: 32.70843930376316
- type: nauc_ndcg_at_5_max
value: 27.924846120043256
- type: nauc_ndcg_at_5_std
value: 6.138807313274158
- type: nauc_precision_at_1000_diff1
value: -32.41203303482423
- type: nauc_precision_at_1000_max
value: 8.093545818882905
- type: nauc_precision_at_1000_std
value: 47.02494471043404
- type: nauc_precision_at_100_diff1
value: -31.578281780421502
- type: nauc_precision_at_100_max
value: 11.08125301543009
- type: nauc_precision_at_100_std
value: 50.533022672180394
- type: nauc_precision_at_10_diff1
value: -22.738530687885405
- type: nauc_precision_at_10_max
value: 23.330840950192325
- type: nauc_precision_at_10_std
value: 50.76435402136226
- type: nauc_precision_at_1_diff1
value: 57.59201084653059
- type: nauc_precision_at_1_max
value: 37.00330988333697
- type: nauc_precision_at_1_std
value: -1.747744103132987
- type: nauc_precision_at_20_diff1
value: -25.002019953837003
- type: nauc_precision_at_20_max
value: 16.971378988976706
- type: nauc_precision_at_20_std
value: 48.07345104684135
- type: nauc_precision_at_3_diff1
value: -8.197173818536056
- type: nauc_precision_at_3_max
value: 25.695195187226403
- type: nauc_precision_at_3_std
value: 31.111863515602995
- type: nauc_precision_at_5_diff1
value: -12.956574437433844
- type: nauc_precision_at_5_max
value: 21.41273346493039
- type: nauc_precision_at_5_std
value: 42.55631329398401
- type: nauc_recall_at_1000_diff1
value: 9.76915442349142
- type: nauc_recall_at_1000_max
value: 23.74302893109814
- type: nauc_recall_at_1000_std
value: 33.123159475147816
- type: nauc_recall_at_100_diff1
value: 13.96782611551897
- type: nauc_recall_at_100_max
value: 21.02306088177266
- type: nauc_recall_at_100_std
value: 3.0239346149170645
- type: nauc_recall_at_10_diff1
value: 36.502833630310036
- type: nauc_recall_at_10_max
value: 15.575967406133087
- type: nauc_recall_at_10_std
value: -25.645224052787295
- type: nauc_recall_at_1_diff1
value: 59.342871771182246
- type: nauc_recall_at_1_max
value: 7.61369981711965
- type: nauc_recall_at_1_std
value: -43.77056595417028
- type: nauc_recall_at_20_diff1
value: 26.27422331579885
- type: nauc_recall_at_20_max
value: 13.135043270702166
- type: nauc_recall_at_20_std
value: -19.92673944513883
- type: nauc_recall_at_3_diff1
value: 48.18220967640245
- type: nauc_recall_at_3_max
value: 9.54094958941248
- type: nauc_recall_at_3_std
value: -37.97033782144305
- type: nauc_recall_at_5_diff1
value: 46.575464923304686
- type: nauc_recall_at_5_max
value: 12.024807120200766
- type: nauc_recall_at_5_std
value: -33.73533843493903
- type: ndcg_at_1
value: 71.642
- type: ndcg_at_10
value: 55.144000000000005
- type: ndcg_at_100
value: 59.753
- type: ndcg_at_1000
value: 66.89500000000001
- type: ndcg_at_20
value: 54.114
- type: ndcg_at_3
value: 62.373
- type: ndcg_at_5
value: 57.926
- type: precision_at_1
value: 80.597
- type: precision_at_10
value: 41.343
- type: precision_at_100
value: 12.030000000000001
- type: precision_at_1000
value: 1.8270000000000002
- type: precision_at_20
value: 31.791000000000004
- type: precision_at_3
value: 63.682
- type: precision_at_5
value: 52.239000000000004
- type: recall_at_1
value: 11.826
- type: recall_at_10
value: 33.28
- type: recall_at_100
value: 65.91
- type: recall_at_1000
value: 88.39200000000001
- type: recall_at_20
value: 44.482
- type: recall_at_3
value: 20.377000000000002
- type: recall_at_5
value: 24.102999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: main_score
value: 46.062999999999995
- type: map_at_1
value: 9.913
- type: map_at_10
value: 22.713
- type: map_at_100
value: 32.995999999999995
- type: map_at_1000
value: 34.845
- type: map_at_20
value: 26.650000000000002
- type: map_at_3
value: 16.052
- type: map_at_5
value: 18.892999999999997
- type: mrr_at_1
value: 72.75
- type: mrr_at_10
value: 79.93075396825398
- type: mrr_at_100
value: 80.15202418448516
- type: mrr_at_1000
value: 80.16338022685652
- type: mrr_at_20
value: 80.10524750447352
- type: mrr_at_3
value: 78.375
- type: mrr_at_5
value: 79.5
- type: nauc_map_at_1000_diff1
value: 15.703992161125676
- type: nauc_map_at_1000_max
value: 23.35271482732561
- type: nauc_map_at_1000_std
value: 31.149527138283002
- type: nauc_map_at_100_diff1
value: 16.785306132760873
- type: nauc_map_at_100_max
value: 21.540254096945795
- type: nauc_map_at_100_std
value: 28.232069035246422
- type: nauc_map_at_10_diff1
value: 20.402743546183082
- type: nauc_map_at_10_max
value: 7.042045670852542
- type: nauc_map_at_10_std
value: 0.16763671800997607
- type: nauc_map_at_1_diff1
value: 35.775061062200926
- type: nauc_map_at_1_max
value: -3.2698850217174287
- type: nauc_map_at_1_std
value: -19.56795709087053
- type: nauc_map_at_20_diff1
value: 18.699651665323326
- type: nauc_map_at_20_max
value: 13.328266382559917
- type: nauc_map_at_20_std
value: 11.47185661443564
- type: nauc_map_at_3_diff1
value: 25.81987347945424
- type: nauc_map_at_3_max
value: -0.15648299152936088
- type: nauc_map_at_3_std
value: -13.835424548479757
- type: nauc_map_at_5_diff1
value: 23.439523519895587
- type: nauc_map_at_5_max
value: 1.5356852327250021
- type: nauc_map_at_5_std
value: -9.703910926625412
- type: nauc_mrr_at_1000_diff1
value: 52.46673675514906
- type: nauc_mrr_at_1000_max
value: 63.470733964613935
- type: nauc_mrr_at_1000_std
value: 45.63124329941225
- type: nauc_mrr_at_100_diff1
value: 52.453615789844285
- type: nauc_mrr_at_100_max
value: 63.46889395676577
- type: nauc_mrr_at_100_std
value: 45.60690760740741
- type: nauc_mrr_at_10_diff1
value: 52.418811815325775
- type: nauc_mrr_at_10_max
value: 63.458017896693896
- type: nauc_mrr_at_10_std
value: 45.69048100462888
- type: nauc_mrr_at_1_diff1
value: 51.64249864649329
- type: nauc_mrr_at_1_max
value: 61.7930671192988
- type: nauc_mrr_at_1_std
value: 45.65780424635283
- type: nauc_mrr_at_20_diff1
value: 52.51320760078821
- type: nauc_mrr_at_20_max
value: 63.45648957193841
- type: nauc_mrr_at_20_std
value: 45.643345257424215
- type: nauc_mrr_at_3_diff1
value: 52.684081166956375
- type: nauc_mrr_at_3_max
value: 63.47934202170013
- type: nauc_mrr_at_3_std
value: 45.258022228781805
- type: nauc_mrr_at_5_diff1
value: 52.404417203072725
- type: nauc_mrr_at_5_max
value: 63.622003998330335
- type: nauc_mrr_at_5_std
value: 45.56023178180955
- type: nauc_ndcg_at_1000_diff1
value: 21.457460034962793
- type: nauc_ndcg_at_1000_max
value: 38.48004433256833
- type: nauc_ndcg_at_1000_std
value: 44.50501821602239
- type: nauc_ndcg_at_100_diff1
value: 22.96499973613431
- type: nauc_ndcg_at_100_max
value: 32.279961000176996
- type: nauc_ndcg_at_100_std
value: 36.24772810425709
- type: nauc_ndcg_at_10_diff1
value: 22.80486448431605
- type: nauc_ndcg_at_10_max
value: 31.855350572992712
- type: nauc_ndcg_at_10_std
value: 32.02098815228779
- type: nauc_ndcg_at_1_diff1
value: 42.52237678010534
- type: nauc_ndcg_at_1_max
value: 43.07107038550254
- type: nauc_ndcg_at_1_std
value: 32.29636539687786
- type: nauc_ndcg_at_20_diff1
value: 23.33376144999378
- type: nauc_ndcg_at_20_max
value: 29.47723113288734
- type: nauc_ndcg_at_20_std
value: 29.39360988758012
- type: nauc_ndcg_at_3_diff1
value: 26.354022177902426
- type: nauc_ndcg_at_3_max
value: 34.34518581558593
- type: nauc_ndcg_at_3_std
value: 30.620971800188308
- type: nauc_ndcg_at_5_diff1
value: 23.743192738244137
- type: nauc_ndcg_at_5_max
value: 31.84064266620126
- type: nauc_ndcg_at_5_std
value: 31.185813277650304
- type: nauc_precision_at_1000_diff1
value: -23.397310460810505
- type: nauc_precision_at_1000_max
value: 4.094434610744116
- type: nauc_precision_at_1000_std
value: 16.721869991290177
- type: nauc_precision_at_100_diff1
value: -9.979052269943192
- type: nauc_precision_at_100_max
value: 30.59858046499311
- type: nauc_precision_at_100_std
value: 48.98467116206844
- type: nauc_precision_at_10_diff1
value: -5.612358654181445
- type: nauc_precision_at_10_max
value: 38.881592521775225
- type: nauc_precision_at_10_std
value: 55.44555278772913
- type: nauc_precision_at_1_diff1
value: 51.64249864649329
- type: nauc_precision_at_1_max
value: 61.7930671192988
- type: nauc_precision_at_1_std
value: 45.65780424635283
- type: nauc_precision_at_20_diff1
value: -5.663214776548806
- type: nauc_precision_at_20_max
value: 37.95746951813096
- type: nauc_precision_at_20_std
value: 55.85134464939927
- type: nauc_precision_at_3_diff1
value: 5.956898719194746
- type: nauc_precision_at_3_max
value: 37.315381572930626
- type: nauc_precision_at_3_std
value: 43.463129246499506
- type: nauc_precision_at_5_diff1
value: -0.67640128719057
- type: nauc_precision_at_5_max
value: 36.05694594117169
- type: nauc_precision_at_5_std
value: 48.36937473304257
- type: nauc_recall_at_1000_diff1
value: 11.230184686028919
- type: nauc_recall_at_1000_max
value: 33.60147376937396
- type: nauc_recall_at_1000_std
value: 53.068732741076055
- type: nauc_recall_at_100_diff1
value: 15.566530633394684
- type: nauc_recall_at_100_max
value: 23.57721391991314
- type: nauc_recall_at_100_std
value: 31.386352775767566
- type: nauc_recall_at_10_diff1
value: 17.096462310522874
- type: nauc_recall_at_10_max
value: 2.2836136689655127
- type: nauc_recall_at_10_std
value: -4.65565377513818
- type: nauc_recall_at_1_diff1
value: 35.775061062200926
- type: nauc_recall_at_1_max
value: -3.2698850217174287
- type: nauc_recall_at_1_std
value: -19.56795709087053
- type: nauc_recall_at_20_diff1
value: 14.19787786895807
- type: nauc_recall_at_20_max
value: 7.524383196640643
- type: nauc_recall_at_20_std
value: 5.656566482975458
- type: nauc_recall_at_3_diff1
value: 23.847261122849588
- type: nauc_recall_at_3_max
value: -2.611801666377753
- type: nauc_recall_at_3_std
value: -16.43695458424158
- type: nauc_recall_at_5_diff1
value: 20.607771671835604
- type: nauc_recall_at_5_max
value: -2.949503014688604
- type: nauc_recall_at_5_std
value: -14.602394621100709
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 46.062999999999995
- type: ndcg_at_100
value: 51.717999999999996
- type: ndcg_at_1000
value: 59.181
- type: ndcg_at_20
value: 45.837
- type: ndcg_at_3
value: 50.568999999999996
- type: ndcg_at_5
value: 47.981
- type: precision_at_1
value: 72.75
- type: precision_at_10
value: 37.1
- type: precision_at_100
value: 11.98
- type: precision_at_1000
value: 2.284
- type: precision_at_20
value: 28.499999999999996
- type: precision_at_3
value: 54.833
- type: precision_at_5
value: 46.550000000000004
- type: recall_at_1
value: 9.913
- type: recall_at_10
value: 28.154
- type: recall_at_100
value: 58.841
- type: recall_at_1000
value: 82.329
- type: recall_at_20
value: 36.971
- type: recall_at_3
value: 17.336
- type: recall_at_5
value: 21.612000000000002
task:
type: Retrieval
- dataset:
config: default
name: MTEB FEVER (default)
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: dev
type: mteb/fever
metrics:
- type: main_score
value: 90.881
- type: map_at_1
value: 80.818
- type: map_at_10
value: 87.866
- type: map_at_100
value: 88.083
- type: map_at_1000
value: 88.095
- type: map_at_20
value: 87.991
- type: map_at_3
value: 87.069
- type: map_at_5
value: 87.569
- type: mrr_at_1
value: 87.56375637563757
- type: mrr_at_10
value: 92.82259178298779
- type: mrr_at_100
value: 92.84322154467066
- type: mrr_at_1000
value: 92.84344246383182
- type: mrr_at_20
value: 92.83903406133638
- type: mrr_at_3
value: 92.52175217521747
- type: mrr_at_5
value: 92.73627362736265
- type: nauc_map_at_1000_diff1
value: 46.87623575032174
- type: nauc_map_at_1000_max
value: 12.297201771693372
- type: nauc_map_at_1000_std
value: -9.479310845495277
- type: nauc_map_at_100_diff1
value: 46.84134556922246
- type: nauc_map_at_100_max
value: 12.292309938105879
- type: nauc_map_at_100_std
value: -9.466678629428921
- type: nauc_map_at_10_diff1
value: 46.181390015451946
- type: nauc_map_at_10_max
value: 11.927988984700725
- type: nauc_map_at_10_std
value: -9.666045508151084
- type: nauc_map_at_1_diff1
value: 53.10928810328134
- type: nauc_map_at_1_max
value: 7.540404621177918
- type: nauc_map_at_1_std
value: -13.906212384769297
- type: nauc_map_at_20_diff1
value: 46.49635746130797
- type: nauc_map_at_20_max
value: 12.13593751368467
- type: nauc_map_at_20_std
value: -9.607633449073036
- type: nauc_map_at_3_diff1
value: 45.940411564236655
- type: nauc_map_at_3_max
value: 11.433507590443073
- type: nauc_map_at_3_std
value: -10.96299821239248
- type: nauc_map_at_5_diff1
value: 45.87354953980392
- type: nauc_map_at_5_max
value: 11.548053546333442
- type: nauc_map_at_5_std
value: -10.299403473081103
- type: nauc_mrr_at_1000_diff1
value: 74.96436552895679
- type: nauc_mrr_at_1000_max
value: 15.081704623272563
- type: nauc_mrr_at_1000_std
value: -21.505452950257524
- type: nauc_mrr_at_100_diff1
value: 74.96337776424838
- type: nauc_mrr_at_100_max
value: 15.084165693265266
- type: nauc_mrr_at_100_std
value: -21.502705745641805
- type: nauc_mrr_at_10_diff1
value: 74.95512856225042
- type: nauc_mrr_at_10_max
value: 15.179216919044547
- type: nauc_mrr_at_10_std
value: -21.54772408489513
- type: nauc_mrr_at_1_diff1
value: 75.1059297404218
- type: nauc_mrr_at_1_max
value: 11.81006208731222
- type: nauc_mrr_at_1_std
value: -20.585909179161106
- type: nauc_mrr_at_20_diff1
value: 74.96842612971291
- type: nauc_mrr_at_20_max
value: 15.114351703094453
- type: nauc_mrr_at_20_std
value: -21.513817851207094
- type: nauc_mrr_at_3_diff1
value: 75.02285494504581
- type: nauc_mrr_at_3_max
value: 16.0556430520842
- type: nauc_mrr_at_3_std
value: -21.96831001623427
- type: nauc_mrr_at_5_diff1
value: 74.90651790965175
- type: nauc_mrr_at_5_max
value: 15.372261833733539
- type: nauc_mrr_at_5_std
value: -21.675988243802003
- type: nauc_ndcg_at_1000_diff1
value: 50.2435944626682
- type: nauc_ndcg_at_1000_max
value: 14.561661200135982
- type: nauc_ndcg_at_1000_std
value: -8.914496686293512
- type: nauc_ndcg_at_100_diff1
value: 49.45862609681797
- type: nauc_ndcg_at_100_max
value: 14.574933247820116
- type: nauc_ndcg_at_100_std
value: -8.401737989352354
- type: nauc_ndcg_at_10_diff1
value: 46.70923651777826
- type: nauc_ndcg_at_10_max
value: 13.472299853545234
- type: nauc_ndcg_at_10_std
value: -8.83553728476895
- type: nauc_ndcg_at_1_diff1
value: 75.1059297404218
- type: nauc_ndcg_at_1_max
value: 11.81006208731222
- type: nauc_ndcg_at_1_std
value: -20.585909179161106
- type: nauc_ndcg_at_20_diff1
value: 47.55000104826263
- type: nauc_ndcg_at_20_max
value: 14.006480095713588
- type: nauc_ndcg_at_20_std
value: -8.658752805425454
- type: nauc_ndcg_at_3_diff1
value: 47.637455273739995
- type: nauc_ndcg_at_3_max
value: 13.770838942196637
- type: nauc_ndcg_at_3_std
value: -11.280620068648076
- type: nauc_ndcg_at_5_diff1
value: 46.43880641265911
- type: nauc_ndcg_at_5_max
value: 13.08583931363886
- type: nauc_ndcg_at_5_std
value: -10.06515821709641
- type: nauc_precision_at_1000_diff1
value: -7.74658978838917
- type: nauc_precision_at_1000_max
value: 4.751261690843568
- type: nauc_precision_at_1000_std
value: 9.364113114197997
- type: nauc_precision_at_100_diff1
value: -6.8148922522222115
- type: nauc_precision_at_100_max
value: 6.972247112602814
- type: nauc_precision_at_100_std
value: 11.878899724333886
- type: nauc_precision_at_10_diff1
value: -9.26742080488489
- type: nauc_precision_at_10_max
value: 10.151685398959382
- type: nauc_precision_at_10_std
value: 12.57287300284158
- type: nauc_precision_at_1_diff1
value: 75.1059297404218
- type: nauc_precision_at_1_max
value: 11.81006208731222
- type: nauc_precision_at_1_std
value: -20.585909179161106
- type: nauc_precision_at_20_diff1
value: -9.46809712351495
- type: nauc_precision_at_20_max
value: 9.070842702517606
- type: nauc_precision_at_20_std
value: 12.63029281322448
- type: nauc_precision_at_3_diff1
value: 4.482731450261291
- type: nauc_precision_at_3_max
value: 15.23040684493045
- type: nauc_precision_at_3_std
value: 1.6067730909628326
- type: nauc_precision_at_5_diff1
value: -5.71269063574531
- type: nauc_precision_at_5_max
value: 11.572460670136449
- type: nauc_precision_at_5_std
value: 7.83824414993744
- type: nauc_recall_at_1000_diff1
value: 2.7016711342522663
- type: nauc_recall_at_1000_max
value: 38.550518524354906
- type: nauc_recall_at_1000_std
value: 46.777091414426614
- type: nauc_recall_at_100_diff1
value: 8.833739498081504
- type: nauc_recall_at_100_max
value: 28.457805489841665
- type: nauc_recall_at_100_std
value: 32.44508615804357
- type: nauc_recall_at_10_diff1
value: 9.414374970261905
- type: nauc_recall_at_10_max
value: 16.400771079732788
- type: nauc_recall_at_10_std
value: 11.211729067346221
- type: nauc_recall_at_1_diff1
value: 53.10928810328134
- type: nauc_recall_at_1_max
value: 7.540404621177918
- type: nauc_recall_at_1_std
value: -13.906212384769297
- type: nauc_recall_at_20_diff1
value: 7.2361585201604255
- type: nauc_recall_at_20_max
value: 19.916481947882193
- type: nauc_recall_at_20_std
value: 16.717994401180736
- type: nauc_recall_at_3_diff1
value: 23.19365013128098
- type: nauc_recall_at_3_max
value: 15.22562423195164
- type: nauc_recall_at_3_std
value: -3.6529481843146376
- type: nauc_recall_at_5_diff1
value: 15.503999284173625
- type: nauc_recall_at_5_max
value: 14.508056870663811
- type: nauc_recall_at_5_std
value: 1.978806929057799
- type: ndcg_at_1
value: 87.564
- type: ndcg_at_10
value: 90.881
- type: ndcg_at_100
value: 91.513
- type: ndcg_at_1000
value: 91.71000000000001
- type: ndcg_at_20
value: 91.148
- type: ndcg_at_3
value: 89.917
- type: ndcg_at_5
value: 90.434
- type: precision_at_1
value: 87.564
- type: precision_at_10
value: 10.711
- type: precision_at_100
value: 1.135
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 5.463
- type: precision_at_3
value: 33.993
- type: precision_at_5
value: 20.888
- type: recall_at_1
value: 80.818
- type: recall_at_10
value: 95.22800000000001
- type: recall_at_100
value: 97.52499999999999
- type: recall_at_1000
value: 98.691
- type: recall_at_20
value: 96.081
- type: recall_at_3
value: 92.43299999999999
- type: recall_at_5
value: 93.92200000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB FEVER (default)
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: main_score
value: 89.917
- type: map_at_1
value: 78.50200000000001
- type: map_at_10
value: 86.568
- type: map_at_100
value: 86.785
- type: map_at_1000
value: 86.797
- type: map_at_20
value: 86.701
- type: map_at_3
value: 85.59400000000001
- type: map_at_5
value: 86.223
- type: mrr_at_1
value: 84.77347734773477
- type: mrr_at_10
value: 91.097966939551
- type: mrr_at_100
value: 91.12558512468273
- type: mrr_at_1000
value: 91.1260701737618
- type: mrr_at_20
value: 91.11946032681844
- type: mrr_at_3
value: 90.68406840684058
- type: mrr_at_5
value: 90.98784878487835
- type: nauc_map_at_1000_diff1
value: 50.87906171648577
- type: nauc_map_at_1000_max
value: 7.146488902357113
- type: nauc_map_at_1000_std
value: -12.846432203603294
- type: nauc_map_at_100_diff1
value: 50.81856235257227
- type: nauc_map_at_100_max
value: 7.142093753041584
- type: nauc_map_at_100_std
value: -12.819609867775798
- type: nauc_map_at_10_diff1
value: 50.334680606872986
- type: nauc_map_at_10_max
value: 7.0836766324370695
- type: nauc_map_at_10_std
value: -12.768283326531977
- type: nauc_map_at_1_diff1
value: 56.03047128824491
- type: nauc_map_at_1_max
value: 1.9657828096288057
- type: nauc_map_at_1_std
value: -16.09258344775108
- type: nauc_map_at_20_diff1
value: 50.59898980840294
- type: nauc_map_at_20_max
value: 7.171824094888314
- type: nauc_map_at_20_std
value: -12.755654528759749
- type: nauc_map_at_3_diff1
value: 50.10970484630358
- type: nauc_map_at_3_max
value: 6.495427590658401
- type: nauc_map_at_3_std
value: -14.334341284587198
- type: nauc_map_at_5_diff1
value: 50.085796858441846
- type: nauc_map_at_5_max
value: 6.9713526722279235
- type: nauc_map_at_5_std
value: -13.24882433153497
- type: nauc_mrr_at_1000_diff1
value: 71.7413632225038
- type: nauc_mrr_at_1000_max
value: 3.865641782196838
- type: nauc_mrr_at_1000_std
value: -24.555236632082018
- type: nauc_mrr_at_100_diff1
value: 71.73848550292642
- type: nauc_mrr_at_100_max
value: 3.868547078561582
- type: nauc_mrr_at_100_std
value: -24.549516364510097
- type: nauc_mrr_at_10_diff1
value: 71.71567149170303
- type: nauc_mrr_at_10_max
value: 3.996112870850431
- type: nauc_mrr_at_10_std
value: -24.507926982679656
- type: nauc_mrr_at_1_diff1
value: 72.45922013700734
- type: nauc_mrr_at_1_max
value: 1.8703455839128875
- type: nauc_mrr_at_1_std
value: -23.12219651563944
- type: nauc_mrr_at_20_diff1
value: 71.74174120635641
- type: nauc_mrr_at_20_max
value: 3.929695014596715
- type: nauc_mrr_at_20_std
value: -24.492801146396122
- type: nauc_mrr_at_3_diff1
value: 71.6212411128049
- type: nauc_mrr_at_3_max
value: 4.227925028200142
- type: nauc_mrr_at_3_std
value: -25.64285955172264
- type: nauc_mrr_at_5_diff1
value: 71.80132592467288
- type: nauc_mrr_at_5_max
value: 4.1553514465112995
- type: nauc_mrr_at_5_std
value: -24.93394619376225
- type: nauc_ndcg_at_1000_diff1
value: 53.6216140857924
- type: nauc_ndcg_at_1000_max
value: 8.199696972556648
- type: nauc_ndcg_at_1000_std
value: -12.848833254863706
- type: nauc_ndcg_at_100_diff1
value: 52.4771074390175
- type: nauc_ndcg_at_100_max
value: 8.266327098153694
- type: nauc_ndcg_at_100_std
value: -12.141877748527016
- type: nauc_ndcg_at_10_diff1
value: 50.39079678583025
- type: nauc_ndcg_at_10_max
value: 8.460346209587346
- type: nauc_ndcg_at_10_std
value: -11.739805102684473
- type: nauc_ndcg_at_1_diff1
value: 72.45922013700734
- type: nauc_ndcg_at_1_max
value: 1.8703455839128875
- type: nauc_ndcg_at_1_std
value: -23.12219651563944
- type: nauc_ndcg_at_20_diff1
value: 51.17449748619954
- type: nauc_ndcg_at_20_max
value: 8.560656277843842
- type: nauc_ndcg_at_20_std
value: -11.721957002532669
- type: nauc_ndcg_at_3_diff1
value: 51.697701767290724
- type: nauc_ndcg_at_3_max
value: 7.949689650260239
- type: nauc_ndcg_at_3_std
value: -15.497849863574933
- type: nauc_ndcg_at_5_diff1
value: 50.49788213345009
- type: nauc_ndcg_at_5_max
value: 8.380898947808362
- type: nauc_ndcg_at_5_std
value: -13.119756502356564
- type: nauc_precision_at_1000_diff1
value: -4.321234329511238
- type: nauc_precision_at_1000_max
value: 4.842614825492312
- type: nauc_precision_at_1000_std
value: 3.517128181017838
- type: nauc_precision_at_100_diff1
value: -7.201118735439735
- type: nauc_precision_at_100_max
value: 6.529523563838742
- type: nauc_precision_at_100_std
value: 7.106363711097527
- type: nauc_precision_at_10_diff1
value: -9.482064191334755
- type: nauc_precision_at_10_max
value: 10.994306197736153
- type: nauc_precision_at_10_std
value: 9.958273491520254
- type: nauc_precision_at_1_diff1
value: 72.45922013700734
- type: nauc_precision_at_1_max
value: 1.8703455839128875
- type: nauc_precision_at_1_std
value: -23.12219651563944
- type: nauc_precision_at_20_diff1
value: -9.380072735429245
- type: nauc_precision_at_20_max
value: 9.856465558009173
- type: nauc_precision_at_20_std
value: 9.131673380453492
- type: nauc_precision_at_3_diff1
value: 9.586710337314623
- type: nauc_precision_at_3_max
value: 14.740209113800102
- type: nauc_precision_at_3_std
value: -3.891333715748583
- type: nauc_precision_at_5_diff1
value: -3.998520236788054
- type: nauc_precision_at_5_max
value: 13.422868860819156
- type: nauc_precision_at_5_std
value: 6.108452997840511
- type: nauc_recall_at_1000_diff1
value: 3.385758105150115
- type: nauc_recall_at_1000_max
value: 47.3665730767981
- type: nauc_recall_at_1000_std
value: 56.87746303806031
- type: nauc_recall_at_100_diff1
value: -2.028014907991153
- type: nauc_recall_at_100_max
value: 32.48324188848066
- type: nauc_recall_at_100_std
value: 44.261168385513336
- type: nauc_recall_at_10_diff1
value: 10.768002004459115
- type: nauc_recall_at_10_max
value: 22.566005820537097
- type: nauc_recall_at_10_std
value: 17.40223735419854
- type: nauc_recall_at_1_diff1
value: 56.03047128824491
- type: nauc_recall_at_1_max
value: 1.9657828096288057
- type: nauc_recall_at_1_std
value: -16.09258344775108
- type: nauc_recall_at_20_diff1
value: 6.801138990752192
- type: nauc_recall_at_20_max
value: 26.58420813169432
- type: nauc_recall_at_20_std
value: 25.593452124921424
- type: nauc_recall_at_3_diff1
value: 28.43603012844233
- type: nauc_recall_at_3_max
value: 13.635019609839791
- type: nauc_recall_at_3_std
value: -7.307728685928379
- type: nauc_recall_at_5_diff1
value: 19.599627188133983
- type: nauc_recall_at_5_max
value: 17.90056850206721
- type: nauc_recall_at_5_std
value: 3.353861530030554
- type: ndcg_at_1
value: 84.773
- type: ndcg_at_10
value: 89.917
- type: ndcg_at_100
value: 90.577
- type: ndcg_at_1000
value: 90.739
- type: ndcg_at_20
value: 90.22200000000001
- type: ndcg_at_3
value: 88.601
- type: ndcg_at_5
value: 89.35499999999999
- type: precision_at_1
value: 84.773
- type: precision_at_10
value: 10.696
- type: precision_at_100
value: 1.13
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 5.455
- type: precision_at_3
value: 33.663
- type: precision_at_5
value: 20.801
- type: recall_at_1
value: 78.50200000000001
- type: recall_at_10
value: 95.64099999999999
- type: recall_at_100
value: 98.05
- type: recall_at_1000
value: 98.964
- type: recall_at_20
value: 96.619
- type: recall_at_3
value: 92.11500000000001
- type: recall_at_5
value: 94.06
task:
type: Retrieval
- dataset:
config: default
name: MTEB FEVER (default)
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: train
type: mteb/fever
metrics:
- type: main_score
value: 90.021
- type: map_at_1
value: 77.215
- type: map_at_10
value: 86.476
- type: map_at_100
value: 86.761
- type: map_at_1000
value: 86.777
- type: map_at_20
value: 86.644
- type: map_at_3
value: 85.468
- type: map_at_5
value: 86.114
- type: mrr_at_1
value: 85.91202986977507
- type: mrr_at_10
value: 92.10172296159176
- type: mrr_at_100
value: 92.11177503330649
- type: mrr_at_1000
value: 92.11183644281331
- type: mrr_at_20
value: 92.10977698448572
- type: mrr_at_3
value: 91.81556021005755
- type: mrr_at_5
value: 92.04623136933206
- type: nauc_map_at_1000_diff1
value: 37.58072321236068
- type: nauc_map_at_1000_max
value: -6.510278319693357
- type: nauc_map_at_1000_std
value: -18.5792270431547
- type: nauc_map_at_100_diff1
value: 37.52385817661018
- type: nauc_map_at_100_max
value: -6.489982072051949
- type: nauc_map_at_100_std
value: -18.540942037635315
- type: nauc_map_at_10_diff1
value: 36.72584282122918
- type: nauc_map_at_10_max
value: -6.378333016857416
- type: nauc_map_at_10_std
value: -18.334301752515383
- type: nauc_map_at_1_diff1
value: 43.69122799154449
- type: nauc_map_at_1_max
value: -11.63127334717789
- type: nauc_map_at_1_std
value: -20.7658737657603
- type: nauc_map_at_20_diff1
value: 37.15506375729163
- type: nauc_map_at_20_max
value: -6.429970912214997
- type: nauc_map_at_20_std
value: -18.42568919268748
- type: nauc_map_at_3_diff1
value: 36.215420008113746
- type: nauc_map_at_3_max
value: -6.550185095475879
- type: nauc_map_at_3_std
value: -19.166433923188197
- type: nauc_map_at_5_diff1
value: 36.27440671840188
- type: nauc_map_at_5_max
value: -6.295231222513407
- type: nauc_map_at_5_std
value: -18.381810402883904
- type: nauc_mrr_at_1000_diff1
value: 63.48752265792847
- type: nauc_mrr_at_1000_max
value: -19.18676872869155
- type: nauc_mrr_at_1000_std
value: -39.57174458519824
- type: nauc_mrr_at_100_diff1
value: 63.48736991454802
- type: nauc_mrr_at_100_max
value: -19.185964488505324
- type: nauc_mrr_at_100_std
value: -39.571005370486844
- type: nauc_mrr_at_10_diff1
value: 63.496892773682575
- type: nauc_mrr_at_10_max
value: -19.137184489398113
- type: nauc_mrr_at_10_std
value: -39.61121405465908
- type: nauc_mrr_at_1_diff1
value: 63.8931650178703
- type: nauc_mrr_at_1_max
value: -19.13870592744866
- type: nauc_mrr_at_1_std
value: -36.21650937803273
- type: nauc_mrr_at_20_diff1
value: 63.48977631792124
- type: nauc_mrr_at_20_max
value: -19.167118938060913
- type: nauc_mrr_at_20_std
value: -39.57706812851535
- type: nauc_mrr_at_3_diff1
value: 63.32934405332199
- type: nauc_mrr_at_3_max
value: -19.24641986865118
- type: nauc_mrr_at_3_std
value: -40.940129761950985
- type: nauc_mrr_at_5_diff1
value: 63.517348684708644
- type: nauc_mrr_at_5_max
value: -19.11256790994168
- type: nauc_mrr_at_5_std
value: -39.9749657068304
- type: nauc_ndcg_at_1000_diff1
value: 41.076101906247835
- type: nauc_ndcg_at_1000_max
value: -7.226733640213606
- type: nauc_ndcg_at_1000_std
value: -20.509409301747596
- type: nauc_ndcg_at_100_diff1
value: 39.912775071923846
- type: nauc_ndcg_at_100_max
value: -6.6031024308101305
- type: nauc_ndcg_at_100_std
value: -19.488976518418685
- type: nauc_ndcg_at_10_diff1
value: 36.991054890053746
- type: nauc_ndcg_at_10_max
value: -5.703804107983826
- type: nauc_ndcg_at_10_std
value: -18.30890245336646
- type: nauc_ndcg_at_1_diff1
value: 63.8931650178703
- type: nauc_ndcg_at_1_max
value: -19.13870592744866
- type: nauc_ndcg_at_1_std
value: -36.21650937803273
- type: nauc_ndcg_at_20_diff1
value: 38.06195629005128
- type: nauc_ndcg_at_20_max
value: -5.956938984887445
- type: nauc_ndcg_at_20_std
value: -18.55811206090083
- type: nauc_ndcg_at_3_diff1
value: 38.3253264990881
- type: nauc_ndcg_at_3_max
value: -6.160356060424505
- type: nauc_ndcg_at_3_std
value: -21.17644073772092
- type: nauc_ndcg_at_5_diff1
value: 36.81395160037575
- type: nauc_ndcg_at_5_max
value: -5.5184833028226015
- type: nauc_ndcg_at_5_std
value: -18.855728016827573
- type: nauc_precision_at_1000_diff1
value: -1.798023567581113
- type: nauc_precision_at_1000_max
value: 2.075676216126402
- type: nauc_precision_at_1000_std
value: 0.6661076521215061
- type: nauc_precision_at_100_diff1
value: -3.4104407178365914
- type: nauc_precision_at_100_max
value: 4.0047525056348565
- type: nauc_precision_at_100_std
value: 2.9538134117977
- type: nauc_precision_at_10_diff1
value: -7.971971190220629
- type: nauc_precision_at_10_max
value: 5.79095981673231
- type: nauc_precision_at_10_std
value: 2.679701881943801
- type: nauc_precision_at_1_diff1
value: 63.8931650178703
- type: nauc_precision_at_1_max
value: -19.13870592744866
- type: nauc_precision_at_1_std
value: -36.21650937803273
- type: nauc_precision_at_20_diff1
value: -5.97650346358847
- type: nauc_precision_at_20_max
value: 5.356231824212161
- type: nauc_precision_at_20_std
value: 3.3717231487953927
- type: nauc_precision_at_3_diff1
value: -4.338422835263307
- type: nauc_precision_at_3_max
value: 5.225732964596468
- type: nauc_precision_at_3_std
value: -7.216509536122836
- type: nauc_precision_at_5_diff1
value: -8.546583059668556
- type: nauc_precision_at_5_max
value: 6.3921561938488995
- type: nauc_precision_at_5_std
value: 0.14590803478964773
- type: nauc_recall_at_1000_diff1
value: -14.550446134779385
- type: nauc_recall_at_1000_max
value: 40.7272814014902
- type: nauc_recall_at_1000_std
value: 51.09977581242159
- type: nauc_recall_at_100_diff1
value: -9.382110771276123
- type: nauc_recall_at_100_max
value: 29.248829469706678
- type: nauc_recall_at_100_std
value: 35.13007427579197
- type: nauc_recall_at_10_diff1
value: -1.9178724742563424
- type: nauc_recall_at_10_max
value: 17.388506357276793
- type: nauc_recall_at_10_std
value: 14.607463593218906
- type: nauc_recall_at_1_diff1
value: 43.69122799154449
- type: nauc_recall_at_1_max
value: -11.63127334717789
- type: nauc_recall_at_1_std
value: -20.7658737657603
- type: nauc_recall_at_20_diff1
value: -4.360500447701097
- type: nauc_recall_at_20_max
value: 21.02263450303614
- type: nauc_recall_at_20_std
value: 20.999393483063248
- type: nauc_recall_at_3_diff1
value: 11.835627611412372
- type: nauc_recall_at_3_max
value: 6.73026263313079
- type: nauc_recall_at_3_std
value: -6.139330166444412
- type: nauc_recall_at_5_diff1
value: 3.847666226700295
- type: nauc_recall_at_5_max
value: 12.82319379524697
- type: nauc_recall_at_5_std
value: 5.2049518693364165
- type: ndcg_at_1
value: 85.912
- type: ndcg_at_10
value: 90.021
- type: ndcg_at_100
value: 90.807
- type: ndcg_at_1000
value: 91.022
- type: ndcg_at_20
value: 90.36800000000001
- type: ndcg_at_3
value: 88.95100000000001
- type: ndcg_at_5
value: 89.54299999999999
- type: precision_at_1
value: 85.912
- type: precision_at_10
value: 11.17
- type: precision_at_100
value: 1.205
- type: precision_at_1000
value: 0.125
- type: precision_at_20
value: 5.742
- type: precision_at_3
value: 34.993
- type: precision_at_5
value: 21.653
- type: recall_at_1
value: 77.215
- type: recall_at_10
value: 95.27
- type: recall_at_100
value: 97.946
- type: recall_at_1000
value: 99.151
- type: recall_at_20
value: 96.282
- type: recall_at_3
value: 92.061
- type: recall_at_5
value: 93.881
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018 (default)
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: dev
type: mteb/fiqa
metrics:
- type: main_score
value: 46.132
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 38.342999999999996
- type: map_at_100
value: 40.264
- type: map_at_1000
value: 40.43
- type: map_at_20
value: 39.446
- type: map_at_3
value: 33.975
- type: map_at_5
value: 36.434
- type: mrr_at_1
value: 46.800000000000004
- type: mrr_at_10
value: 54.254126984126984
- type: mrr_at_100
value: 54.923209054678026
- type: mrr_at_1000
value: 54.96385524659587
- type: mrr_at_20
value: 54.642069278330894
- type: mrr_at_3
value: 51.96666666666668
- type: mrr_at_5
value: 53.36666666666666
- type: nauc_map_at_1000_diff1
value: 49.841885106876695
- type: nauc_map_at_1000_max
value: 30.36895689778847
- type: nauc_map_at_1000_std
value: 1.7567744666421903
- type: nauc_map_at_100_diff1
value: 49.81372794693455
- type: nauc_map_at_100_max
value: 30.31791638948266
- type: nauc_map_at_100_std
value: 1.7727102636629064
- type: nauc_map_at_10_diff1
value: 49.799159621528446
- type: nauc_map_at_10_max
value: 28.95097185909244
- type: nauc_map_at_10_std
value: -0.2143787100918625
- type: nauc_map_at_1_diff1
value: 52.58007399240151
- type: nauc_map_at_1_max
value: 23.415428952222296
- type: nauc_map_at_1_std
value: -3.4523781889766534
- type: nauc_map_at_20_diff1
value: 49.77759278250616
- type: nauc_map_at_20_max
value: 29.637020999394448
- type: nauc_map_at_20_std
value: 0.9417068184996975
- type: nauc_map_at_3_diff1
value: 50.15320410883135
- type: nauc_map_at_3_max
value: 25.672823727430483
- type: nauc_map_at_3_std
value: -3.6368832994092495
- type: nauc_map_at_5_diff1
value: 49.73253471375265
- type: nauc_map_at_5_max
value: 27.452729712955946
- type: nauc_map_at_5_std
value: -2.597504538318964
- type: nauc_mrr_at_1000_diff1
value: 59.23823771450779
- type: nauc_mrr_at_1000_max
value: 43.689096630807406
- type: nauc_mrr_at_1000_std
value: 6.006395209759317
- type: nauc_mrr_at_100_diff1
value: 59.24508199769832
- type: nauc_mrr_at_100_max
value: 43.707191670788845
- type: nauc_mrr_at_100_std
value: 6.038811740941315
- type: nauc_mrr_at_10_diff1
value: 59.18050290269257
- type: nauc_mrr_at_10_max
value: 43.68703710709348
- type: nauc_mrr_at_10_std
value: 5.920147856790965
- type: nauc_mrr_at_1_diff1
value: 61.23049191214833
- type: nauc_mrr_at_1_max
value: 42.82186697869064
- type: nauc_mrr_at_1_std
value: 5.226665401704537
- type: nauc_mrr_at_20_diff1
value: 59.20345490177547
- type: nauc_mrr_at_20_max
value: 43.71801475513994
- type: nauc_mrr_at_20_std
value: 6.06326305891993
- type: nauc_mrr_at_3_diff1
value: 59.51435687918044
- type: nauc_mrr_at_3_max
value: 42.75973795344299
- type: nauc_mrr_at_3_std
value: 3.7021523288826534
- type: nauc_mrr_at_5_diff1
value: 59.33809476755813
- type: nauc_mrr_at_5_max
value: 43.35457262061369
- type: nauc_mrr_at_5_std
value: 5.133928801400819
- type: nauc_ndcg_at_1000_diff1
value: 52.201491960514424
- type: nauc_ndcg_at_1000_max
value: 36.67184214497183
- type: nauc_ndcg_at_1000_std
value: 7.063547365940826
- type: nauc_ndcg_at_100_diff1
value: 51.6839609303026
- type: nauc_ndcg_at_100_max
value: 36.54239095504816
- type: nauc_ndcg_at_100_std
value: 8.305198443785065
- type: nauc_ndcg_at_10_diff1
value: 51.015102739483666
- type: nauc_ndcg_at_10_max
value: 33.38470092473942
- type: nauc_ndcg_at_10_std
value: 3.4372330157713913
- type: nauc_ndcg_at_1_diff1
value: 61.23049191214833
- type: nauc_ndcg_at_1_max
value: 42.82186697869064
- type: nauc_ndcg_at_1_std
value: 5.226665401704537
- type: nauc_ndcg_at_20_diff1
value: 51.148241453136286
- type: nauc_ndcg_at_20_max
value: 34.415266899737986
- type: nauc_ndcg_at_20_std
value: 5.722948452578717
- type: nauc_ndcg_at_3_diff1
value: 50.183107867516384
- type: nauc_ndcg_at_3_max
value: 31.825660975728017
- type: nauc_ndcg_at_3_std
value: 0.05987477146294962
- type: nauc_ndcg_at_5_diff1
value: 50.27752187238947
- type: nauc_ndcg_at_5_max
value: 31.58055768641312
- type: nauc_ndcg_at_5_std
value: 0.095638813464201
- type: nauc_precision_at_1000_diff1
value: -1.081891577216482
- type: nauc_precision_at_1000_max
value: 22.772384668021623
- type: nauc_precision_at_1000_std
value: 20.37369910022167
- type: nauc_precision_at_100_diff1
value: 4.865265359179138
- type: nauc_precision_at_100_max
value: 28.950539208916727
- type: nauc_precision_at_100_std
value: 27.88929247051143
- type: nauc_precision_at_10_diff1
value: 18.581939701749484
- type: nauc_precision_at_10_max
value: 32.5407981760264
- type: nauc_precision_at_10_std
value: 18.06686305505164
- type: nauc_precision_at_1_diff1
value: 61.23049191214833
- type: nauc_precision_at_1_max
value: 42.82186697869064
- type: nauc_precision_at_1_std
value: 5.226665401704537
- type: nauc_precision_at_20_diff1
value: 12.547121372367496
- type: nauc_precision_at_20_max
value: 30.247027897607875
- type: nauc_precision_at_20_std
value: 23.213776336403853
- type: nauc_precision_at_3_diff1
value: 33.47981633285446
- type: nauc_precision_at_3_max
value: 32.05249666039517
- type: nauc_precision_at_3_std
value: 3.7643758682601813
- type: nauc_precision_at_5_diff1
value: 24.156736607137386
- type: nauc_precision_at_5_max
value: 31.58120543424835
- type: nauc_precision_at_5_std
value: 8.826547060575736
- type: nauc_recall_at_1000_diff1
value: 44.70168791342202
- type: nauc_recall_at_1000_max
value: 40.019041375679365
- type: nauc_recall_at_1000_std
value: 26.28492676001751
- type: nauc_recall_at_100_diff1
value: 38.85858202136479
- type: nauc_recall_at_100_max
value: 35.63673405628285
- type: nauc_recall_at_100_std
value: 26.480426298783005
- type: nauc_recall_at_10_diff1
value: 41.87765017247146
- type: nauc_recall_at_10_max
value: 26.94832721731921
- type: nauc_recall_at_10_std
value: 5.096767252321309
- type: nauc_recall_at_1_diff1
value: 52.58007399240151
- type: nauc_recall_at_1_max
value: 23.415428952222296
- type: nauc_recall_at_1_std
value: -3.4523781889766534
- type: nauc_recall_at_20_diff1
value: 40.31961054933225
- type: nauc_recall_at_20_max
value: 29.149084076136273
- type: nauc_recall_at_20_std
value: 12.080660943653156
- type: nauc_recall_at_3_diff1
value: 44.845037051363235
- type: nauc_recall_at_3_max
value: 22.163030784764484
- type: nauc_recall_at_3_std
value: -5.426325332659164
- type: nauc_recall_at_5_diff1
value: 43.36113793278537
- type: nauc_recall_at_5_max
value: 23.182744951367788
- type: nauc_recall_at_5_std
value: -3.634417407112399
- type: ndcg_at_1
value: 46.800000000000004
- type: ndcg_at_10
value: 46.132
- type: ndcg_at_100
value: 52.410000000000004
- type: ndcg_at_1000
value: 55.057
- type: ndcg_at_20
value: 48.679
- type: ndcg_at_3
value: 42.487
- type: ndcg_at_5
value: 43.586999999999996
- type: precision_at_1
value: 46.800000000000004
- type: precision_at_10
value: 11.74
- type: precision_at_100
value: 1.8419999999999999
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_20
value: 7.07
- type: precision_at_3
value: 26.200000000000003
- type: precision_at_5
value: 19.16
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 52.979
- type: recall_at_100
value: 76.048
- type: recall_at_1000
value: 92.054
- type: recall_at_20
value: 60.624
- type: recall_at_3
value: 38.657000000000004
- type: recall_at_5
value: 44.862
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018 (default)
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: main_score
value: 43.887
- type: map_at_1
value: 21.397
- type: map_at_10
value: 35.811
- type: map_at_100
value: 37.661
- type: map_at_1000
value: 37.839
- type: map_at_20
value: 36.727
- type: map_at_3
value: 31.493
- type: map_at_5
value: 33.992
- type: mrr_at_1
value: 42.74691358024691
- type: mrr_at_10
value: 52.44727366255143
- type: mrr_at_100
value: 53.157106113787755
- type: mrr_at_1000
value: 53.19590692557363
- type: mrr_at_20
value: 52.788702294851234
- type: mrr_at_3
value: 50.231481481481474
- type: mrr_at_5
value: 51.604938271604915
- type: nauc_map_at_1000_diff1
value: 44.99013932786583
- type: nauc_map_at_1000_max
value: 36.2931656288237
- type: nauc_map_at_1000_std
value: 2.744952096504704
- type: nauc_map_at_100_diff1
value: 44.86944269697463
- type: nauc_map_at_100_max
value: 36.18298281198049
- type: nauc_map_at_100_std
value: 2.7487976881234784
- type: nauc_map_at_10_diff1
value: 44.701036690482844
- type: nauc_map_at_10_max
value: 34.91880124794292
- type: nauc_map_at_10_std
value: 1.5099484081332097
- type: nauc_map_at_1_diff1
value: 50.85379952260034
- type: nauc_map_at_1_max
value: 27.394421957915572
- type: nauc_map_at_1_std
value: -3.6437293825619923
- type: nauc_map_at_20_diff1
value: 44.643893347140214
- type: nauc_map_at_20_max
value: 35.78032300474766
- type: nauc_map_at_20_std
value: 2.0540696985077713
- type: nauc_map_at_3_diff1
value: 46.924921206244605
- type: nauc_map_at_3_max
value: 31.95948324092745
- type: nauc_map_at_3_std
value: -0.24644658949620132
- type: nauc_map_at_5_diff1
value: 45.299548947339346
- type: nauc_map_at_5_max
value: 33.560927993044636
- type: nauc_map_at_5_std
value: -0.09229167862135255
- type: nauc_mrr_at_1000_diff1
value: 53.97584579514102
- type: nauc_mrr_at_1000_max
value: 41.39325946543948
- type: nauc_mrr_at_1000_std
value: 2.7797248987216774
- type: nauc_mrr_at_100_diff1
value: 53.95469720996498
- type: nauc_mrr_at_100_max
value: 41.41453164205358
- type: nauc_mrr_at_100_std
value: 2.8260988232101902
- type: nauc_mrr_at_10_diff1
value: 53.72315979312175
- type: nauc_mrr_at_10_max
value: 41.177743822376904
- type: nauc_mrr_at_10_std
value: 2.563267516014612
- type: nauc_mrr_at_1_diff1
value: 57.590727821071155
- type: nauc_mrr_at_1_max
value: 41.635385860154074
- type: nauc_mrr_at_1_std
value: -0.44532344504198534
- type: nauc_mrr_at_20_diff1
value: 53.83801635440246
- type: nauc_mrr_at_20_max
value: 41.28524524541232
- type: nauc_mrr_at_20_std
value: 2.5331225115409577
- type: nauc_mrr_at_3_diff1
value: 54.39722667585212
- type: nauc_mrr_at_3_max
value: 40.54145465851505
- type: nauc_mrr_at_3_std
value: 1.6925912897229027
- type: nauc_mrr_at_5_diff1
value: 53.691867160376816
- type: nauc_mrr_at_5_max
value: 40.94797527156675
- type: nauc_mrr_at_5_std
value: 2.227219454930413
- type: nauc_ndcg_at_1000_diff1
value: 47.28950242475927
- type: nauc_ndcg_at_1000_max
value: 40.558784896965015
- type: nauc_ndcg_at_1000_std
value: 6.916048078136412
- type: nauc_ndcg_at_100_diff1
value: 45.803609057238724
- type: nauc_ndcg_at_100_max
value: 39.9247602434488
- type: nauc_ndcg_at_100_std
value: 8.070013922609293
- type: nauc_ndcg_at_10_diff1
value: 44.601721852568154
- type: nauc_ndcg_at_10_max
value: 36.7523945635637
- type: nauc_ndcg_at_10_std
value: 3.7741680838463916
- type: nauc_ndcg_at_1_diff1
value: 57.590727821071155
- type: nauc_ndcg_at_1_max
value: 41.635385860154074
- type: nauc_ndcg_at_1_std
value: -0.44532344504198534
- type: nauc_ndcg_at_20_diff1
value: 44.84087184273544
- type: nauc_ndcg_at_20_max
value: 38.32125780917691
- type: nauc_ndcg_at_20_std
value: 4.548886454834896
- type: nauc_ndcg_at_3_diff1
value: 46.45102235679583
- type: nauc_ndcg_at_3_max
value: 36.9633250683586
- type: nauc_ndcg_at_3_std
value: 2.369907620024769
- type: nauc_ndcg_at_5_diff1
value: 44.32017759567463
- type: nauc_ndcg_at_5_max
value: 35.90479608408539
- type: nauc_ndcg_at_5_std
value: 1.450222645028762
- type: nauc_precision_at_1000_diff1
value: 1.3454169253303294
- type: nauc_precision_at_1000_max
value: 23.88451750412882
- type: nauc_precision_at_1000_std
value: 12.591204064713308
- type: nauc_precision_at_100_diff1
value: 6.012218731725929
- type: nauc_precision_at_100_max
value: 30.969198659050733
- type: nauc_precision_at_100_std
value: 18.35239521849261
- type: nauc_precision_at_10_diff1
value: 16.908790779236835
- type: nauc_precision_at_10_max
value: 37.080559157562455
- type: nauc_precision_at_10_std
value: 12.110645329690259
- type: nauc_precision_at_1_diff1
value: 57.590727821071155
- type: nauc_precision_at_1_max
value: 41.635385860154074
- type: nauc_precision_at_1_std
value: -0.44532344504198534
- type: nauc_precision_at_20_diff1
value: 12.877352199360345
- type: nauc_precision_at_20_max
value: 37.364422905122815
- type: nauc_precision_at_20_std
value: 13.813344186459652
- type: nauc_precision_at_3_diff1
value: 32.81390693003651
- type: nauc_precision_at_3_max
value: 38.89224188329493
- type: nauc_precision_at_3_std
value: 6.490943672811113
- type: nauc_precision_at_5_diff1
value: 23.31033104699241
- type: nauc_precision_at_5_max
value: 37.026347485355956
- type: nauc_precision_at_5_std
value: 6.082794133847137
- type: nauc_recall_at_1000_diff1
value: 40.21199090930344
- type: nauc_recall_at_1000_max
value: 45.44325141564459
- type: nauc_recall_at_1000_std
value: 39.95206397839652
- type: nauc_recall_at_100_diff1
value: 28.694180171434674
- type: nauc_recall_at_100_max
value: 36.16137724563645
- type: nauc_recall_at_100_std
value: 29.362576415720426
- type: nauc_recall_at_10_diff1
value: 30.82350118152907
- type: nauc_recall_at_10_max
value: 28.84721188763083
- type: nauc_recall_at_10_std
value: 6.871358974808361
- type: nauc_recall_at_1_diff1
value: 50.85379952260034
- type: nauc_recall_at_1_max
value: 27.394421957915572
- type: nauc_recall_at_1_std
value: -3.6437293825619923
- type: nauc_recall_at_20_diff1
value: 30.494672593660365
- type: nauc_recall_at_20_max
value: 32.451452059083
- type: nauc_recall_at_20_std
value: 8.857752757738012
- type: nauc_recall_at_3_diff1
value: 37.98407967492573
- type: nauc_recall_at_3_max
value: 26.531560809821137
- type: nauc_recall_at_3_std
value: 1.2955663995782718
- type: nauc_recall_at_5_diff1
value: 32.84916383815314
- type: nauc_recall_at_5_max
value: 26.621206298631378
- type: nauc_recall_at_5_std
value: 1.6024978706362352
- type: ndcg_at_1
value: 42.747
- type: ndcg_at_10
value: 43.887
- type: ndcg_at_100
value: 50.485
- type: ndcg_at_1000
value: 53.400999999999996
- type: ndcg_at_20
value: 46.098
- type: ndcg_at_3
value: 40.602
- type: ndcg_at_5
value: 41.725
- type: precision_at_1
value: 42.747
- type: precision_at_10
value: 11.991
- type: precision_at_100
value: 1.889
- type: precision_at_1000
value: 0.241
- type: precision_at_20
value: 6.959999999999999
- type: precision_at_3
value: 27.058
- type: precision_at_5
value: 19.814999999999998
- type: recall_at_1
value: 21.397
- type: recall_at_10
value: 50.678
- type: recall_at_100
value: 75.108
- type: recall_at_1000
value: 92.465
- type: recall_at_20
value: 57.474000000000004
- type: recall_at_3
value: 37.391000000000005
- type: recall_at_5
value: 43.566
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018 (default)
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: train
type: mteb/fiqa
metrics:
- type: main_score
value: 45.074
- type: map_at_1
value: 22.921
- type: map_at_10
value: 37.062
- type: map_at_100
value: 38.869
- type: map_at_1000
value: 39.031
- type: map_at_20
value: 38.073
- type: map_at_3
value: 32.482
- type: map_at_5
value: 34.975
- type: mrr_at_1
value: 42.81818181818181
- type: mrr_at_10
value: 52.01925685425696
- type: mrr_at_100
value: 52.76535915811975
- type: mrr_at_1000
value: 52.80323713270641
- type: mrr_at_20
value: 52.46928188075179
- type: mrr_at_3
value: 49.563636363636505
- type: mrr_at_5
value: 51.04090909090917
- type: nauc_map_at_1000_diff1
value: 45.10345424051492
- type: nauc_map_at_1000_max
value: 29.68487371437469
- type: nauc_map_at_1000_std
value: 1.238229479331942
- type: nauc_map_at_100_diff1
value: 45.07560751433321
- type: nauc_map_at_100_max
value: 29.621328097853137
- type: nauc_map_at_100_std
value: 1.1967771682187873
- type: nauc_map_at_10_diff1
value: 44.843509175193056
- type: nauc_map_at_10_max
value: 28.618388907804658
- type: nauc_map_at_10_std
value: -0.17386075400517237
- type: nauc_map_at_1_diff1
value: 49.47111917296565
- type: nauc_map_at_1_max
value: 20.0742470618401
- type: nauc_map_at_1_std
value: -4.129360092632688
- type: nauc_map_at_20_diff1
value: 44.95018685490344
- type: nauc_map_at_20_max
value: 29.150108596298434
- type: nauc_map_at_20_std
value: 0.6249683074740969
- type: nauc_map_at_3_diff1
value: 45.01551197502368
- type: nauc_map_at_3_max
value: 25.1628789711796
- type: nauc_map_at_3_std
value: -3.321515508442981
- type: nauc_map_at_5_diff1
value: 44.91318371210472
- type: nauc_map_at_5_max
value: 27.12198758255798
- type: nauc_map_at_5_std
value: -1.8418885545143031
- type: nauc_mrr_at_1000_diff1
value: 51.02890753099619
- type: nauc_mrr_at_1000_max
value: 37.20699525567458
- type: nauc_mrr_at_1000_std
value: 3.189109744356073
- type: nauc_mrr_at_100_diff1
value: 51.015583067584146
- type: nauc_mrr_at_100_max
value: 37.20945921198165
- type: nauc_mrr_at_100_std
value: 3.2119457438429047
- type: nauc_mrr_at_10_diff1
value: 50.938326208533056
- type: nauc_mrr_at_10_max
value: 37.2328138702086
- type: nauc_mrr_at_10_std
value: 3.1417844227142577
- type: nauc_mrr_at_1_diff1
value: 54.83336818983676
- type: nauc_mrr_at_1_max
value: 35.941190580000395
- type: nauc_mrr_at_1_std
value: 0.11480196188945171
- type: nauc_mrr_at_20_diff1
value: 50.97564795196412
- type: nauc_mrr_at_20_max
value: 37.22205264818766
- type: nauc_mrr_at_20_std
value: 3.2001064750905672
- type: nauc_mrr_at_3_diff1
value: 51.12200690387213
- type: nauc_mrr_at_3_max
value: 36.605143686242045
- type: nauc_mrr_at_3_std
value: 1.9427581254272008
- type: nauc_mrr_at_5_diff1
value: 51.08466836245801
- type: nauc_mrr_at_5_max
value: 37.23852403243883
- type: nauc_mrr_at_5_std
value: 2.7992259556688466
- type: nauc_ndcg_at_1000_diff1
value: 45.6295653089338
- type: nauc_ndcg_at_1000_max
value: 34.25244958857478
- type: nauc_ndcg_at_1000_std
value: 5.968157773281027
- type: nauc_ndcg_at_100_diff1
value: 45.15925091294929
- type: nauc_ndcg_at_100_max
value: 33.77292060148967
- type: nauc_ndcg_at_100_std
value: 6.252767106085369
- type: nauc_ndcg_at_10_diff1
value: 44.63262132249889
- type: nauc_ndcg_at_10_max
value: 31.804054311383613
- type: nauc_ndcg_at_10_std
value: 2.868169824330679
- type: nauc_ndcg_at_1_diff1
value: 54.83336818983676
- type: nauc_ndcg_at_1_max
value: 35.941190580000395
- type: nauc_ndcg_at_1_std
value: 0.11480196188945171
- type: nauc_ndcg_at_20_diff1
value: 44.73531667035927
- type: nauc_ndcg_at_20_max
value: 32.36405405932841
- type: nauc_ndcg_at_20_std
value: 4.234168192043894
- type: nauc_ndcg_at_3_diff1
value: 45.180068719892965
- type: nauc_ndcg_at_3_max
value: 31.144658941814473
- type: nauc_ndcg_at_3_std
value: 0.15981365840386932
- type: nauc_ndcg_at_5_diff1
value: 44.91186731928022
- type: nauc_ndcg_at_5_max
value: 31.097528102462903
- type: nauc_ndcg_at_5_std
value: 1.0978416567636418
- type: nauc_precision_at_1000_diff1
value: 0.10884757461177323
- type: nauc_precision_at_1000_max
value: 22.44073868984244
- type: nauc_precision_at_1000_std
value: 18.425802177787244
- type: nauc_precision_at_100_diff1
value: 8.326770033288243
- type: nauc_precision_at_100_max
value: 29.87121252902087
- type: nauc_precision_at_100_std
value: 22.471637271023955
- type: nauc_precision_at_10_diff1
value: 20.3859018808304
- type: nauc_precision_at_10_max
value: 35.387490020659186
- type: nauc_precision_at_10_std
value: 14.452716344612679
- type: nauc_precision_at_1_diff1
value: 54.83336818983676
- type: nauc_precision_at_1_max
value: 35.941190580000395
- type: nauc_precision_at_1_std
value: 0.11480196188945171
- type: nauc_precision_at_20_diff1
value: 16.24605754303343
- type: nauc_precision_at_20_max
value: 33.818393780875525
- type: nauc_precision_at_20_std
value: 18.42940330763103
- type: nauc_precision_at_3_diff1
value: 31.181315158851408
- type: nauc_precision_at_3_max
value: 35.71839391755647
- type: nauc_precision_at_3_std
value: 4.86245107443907
- type: nauc_precision_at_5_diff1
value: 26.18450860125776
- type: nauc_precision_at_5_max
value: 36.32130007403958
- type: nauc_precision_at_5_std
value: 9.106489600607265
- type: nauc_recall_at_1000_diff1
value: 21.411131898774677
- type: nauc_recall_at_1000_max
value: 34.541893106658605
- type: nauc_recall_at_1000_std
value: 40.6467864769445
- type: nauc_recall_at_100_diff1
value: 28.25747320103834
- type: nauc_recall_at_100_max
value: 29.192936775640888
- type: nauc_recall_at_100_std
value: 22.38141045002714
- type: nauc_recall_at_10_diff1
value: 33.183148689667306
- type: nauc_recall_at_10_max
value: 26.115736478754542
- type: nauc_recall_at_10_std
value: 5.779562369828712
- type: nauc_recall_at_1_diff1
value: 49.47111917296565
- type: nauc_recall_at_1_max
value: 20.0742470618401
- type: nauc_recall_at_1_std
value: -4.129360092632688
- type: nauc_recall_at_20_diff1
value: 31.3273565134318
- type: nauc_recall_at_20_max
value: 26.118667671265268
- type: nauc_recall_at_20_std
value: 10.337063376342904
- type: nauc_recall_at_3_diff1
value: 37.71800914450827
- type: nauc_recall_at_3_max
value: 21.998612117129866
- type: nauc_recall_at_3_std
value: -2.8573409678442667
- type: nauc_recall_at_5_diff1
value: 36.035788981718326
- type: nauc_recall_at_5_max
value: 24.462942381019985
- type: nauc_recall_at_5_std
value: 0.5720741719496573
- type: ndcg_at_1
value: 42.818
- type: ndcg_at_10
value: 45.074
- type: ndcg_at_100
value: 51.405
- type: ndcg_at_1000
value: 54.092
- type: ndcg_at_20
value: 47.555
- type: ndcg_at_3
value: 40.735
- type: ndcg_at_5
value: 42.229
- type: precision_at_1
value: 42.818
- type: precision_at_10
value: 12.110999999999999
- type: precision_at_100
value: 1.876
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_20
value: 7.117999999999999
- type: precision_at_3
value: 26.473000000000003
- type: precision_at_5
value: 19.465
- type: recall_at_1
value: 22.921
- type: recall_at_10
value: 52.942
- type: recall_at_100
value: 76.61200000000001
- type: recall_at_1000
value: 92.793
- type: recall_at_20
value: 60.809999999999995
- type: recall_at_3
value: 37.830999999999996
- type: recall_at_5
value: 44.279
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA (default)
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: dev
type: mteb/hotpotqa
metrics:
- type: main_score
value: 77.35600000000001
- type: map_at_1
value: 42.299
- type: map_at_10
value: 70.006
- type: map_at_100
value: 70.775
- type: map_at_1000
value: 70.82300000000001
- type: map_at_20
value: 70.47099999999999
- type: map_at_3
value: 66.81200000000001
- type: map_at_5
value: 68.85
- type: mrr_at_1
value: 84.59702588580869
- type: mrr_at_10
value: 89.3224608857067
- type: mrr_at_100
value: 89.42205720574383
- type: mrr_at_1000
value: 89.425588995421
- type: mrr_at_20
value: 89.38641899747822
- type: mrr_at_3
value: 88.68184321644944
- type: mrr_at_5
value: 89.0995043143014
- type: nauc_map_at_1000_diff1
value: 11.875767523175762
- type: nauc_map_at_1000_max
value: 23.23376674530728
- type: nauc_map_at_1000_std
value: 18.523605995632938
- type: nauc_map_at_100_diff1
value: 11.85910449749788
- type: nauc_map_at_100_max
value: 23.239547476164876
- type: nauc_map_at_100_std
value: 18.565229607460537
- type: nauc_map_at_10_diff1
value: 11.607663265355745
- type: nauc_map_at_10_max
value: 22.923495646620154
- type: nauc_map_at_10_std
value: 18.030180953748534
- type: nauc_map_at_1_diff1
value: 69.04595571010425
- type: nauc_map_at_1_max
value: 42.68450581268141
- type: nauc_map_at_1_std
value: 3.9078744944302226
- type: nauc_map_at_20_diff1
value: 11.723969128072866
- type: nauc_map_at_20_max
value: 23.11544870270342
- type: nauc_map_at_20_std
value: 18.41858338547983
- type: nauc_map_at_3_diff1
value: 11.195009895256332
- type: nauc_map_at_3_max
value: 21.124864974433763
- type: nauc_map_at_3_std
value: 14.668115105817323
- type: nauc_map_at_5_diff1
value: 11.399725827702468
- type: nauc_map_at_5_max
value: 22.68356071435758
- type: nauc_map_at_5_std
value: 17.006805900547196
- type: nauc_mrr_at_1000_diff1
value: 67.99516710058342
- type: nauc_mrr_at_1000_max
value: 45.15957182658708
- type: nauc_mrr_at_1000_std
value: 5.625688035185145
- type: nauc_mrr_at_100_diff1
value: 68.00038022639141
- type: nauc_mrr_at_100_max
value: 45.1718894878634
- type: nauc_mrr_at_100_std
value: 5.642257978446126
- type: nauc_mrr_at_10_diff1
value: 67.97643955659808
- type: nauc_mrr_at_10_max
value: 45.24875815550117
- type: nauc_mrr_at_10_std
value: 5.6282245777631825
- type: nauc_mrr_at_1_diff1
value: 69.04595571010425
- type: nauc_mrr_at_1_max
value: 42.68450581268141
- type: nauc_mrr_at_1_std
value: 3.9078744944302226
- type: nauc_mrr_at_20_diff1
value: 67.98186373375957
- type: nauc_mrr_at_20_max
value: 45.16955056454227
- type: nauc_mrr_at_20_std
value: 5.643021098296383
- type: nauc_mrr_at_3_diff1
value: 67.74068066479995
- type: nauc_mrr_at_3_max
value: 45.233627819514496
- type: nauc_mrr_at_3_std
value: 5.073903037944697
- type: nauc_mrr_at_5_diff1
value: 67.90073680819802
- type: nauc_mrr_at_5_max
value: 45.28874529948139
- type: nauc_mrr_at_5_std
value: 5.533506436522208
- type: nauc_ndcg_at_1000_diff1
value: 18.45245983930683
- type: nauc_ndcg_at_1000_max
value: 27.416507398330854
- type: nauc_ndcg_at_1000_std
value: 20.799288194838745
- type: nauc_ndcg_at_100_diff1
value: 17.774579523633484
- type: nauc_ndcg_at_100_max
value: 27.484015450724563
- type: nauc_ndcg_at_100_std
value: 21.824361827289604
- type: nauc_ndcg_at_10_diff1
value: 16.454456871906594
- type: nauc_ndcg_at_10_max
value: 26.248157142106788
- type: nauc_ndcg_at_10_std
value: 19.85534143153061
- type: nauc_ndcg_at_1_diff1
value: 69.04595571010425
- type: nauc_ndcg_at_1_max
value: 42.68450581268141
- type: nauc_ndcg_at_1_std
value: 3.9078744944302226
- type: nauc_ndcg_at_20_diff1
value: 16.783596764102448
- type: nauc_ndcg_at_20_max
value: 26.674447936981803
- type: nauc_ndcg_at_20_std
value: 20.955085734378283
- type: nauc_ndcg_at_3_diff1
value: 16.323138577650877
- type: nauc_ndcg_at_3_max
value: 23.919505607419378
- type: nauc_ndcg_at_3_std
value: 14.438155012059939
- type: nauc_ndcg_at_5_diff1
value: 16.252513953720612
- type: nauc_ndcg_at_5_max
value: 25.834906380090715
- type: nauc_ndcg_at_5_std
value: 17.797879786189498
- type: nauc_precision_at_1000_diff1
value: -5.612996021391802
- type: nauc_precision_at_1000_max
value: 29.621124808949475
- type: nauc_precision_at_1000_std
value: 60.2180272898463
- type: nauc_precision_at_100_diff1
value: -0.4655256365736023
- type: nauc_precision_at_100_max
value: 27.863131801262153
- type: nauc_precision_at_100_std
value: 48.24283178268865
- type: nauc_precision_at_10_diff1
value: 1.467484417678075
- type: nauc_precision_at_10_max
value: 23.063996835379925
- type: nauc_precision_at_10_std
value: 30.225428590871395
- type: nauc_precision_at_1_diff1
value: 69.04595571010425
- type: nauc_precision_at_1_max
value: 42.68450581268141
- type: nauc_precision_at_1_std
value: 3.9078744944302226
- type: nauc_precision_at_20_diff1
value: 0.16098170706775244
- type: nauc_precision_at_20_max
value: 23.545698533798383
- type: nauc_precision_at_20_std
value: 35.3738609349459
- type: nauc_precision_at_3_diff1
value: 4.822099897775316
- type: nauc_precision_at_3_max
value: 19.882902254898795
- type: nauc_precision_at_3_std
value: 17.397463603075302
- type: nauc_precision_at_5_diff1
value: 3.1779150794512656
- type: nauc_precision_at_5_max
value: 22.753201773071552
- type: nauc_precision_at_5_std
value: 24.028684632710412
- type: nauc_recall_at_1000_diff1
value: -5.6129960213919
- type: nauc_recall_at_1000_max
value: 29.62112480894871
- type: nauc_recall_at_1000_std
value: 60.2180272898464
- type: nauc_recall_at_100_diff1
value: -0.4655256365738292
- type: nauc_recall_at_100_max
value: 27.863131801261865
- type: nauc_recall_at_100_std
value: 48.24283178268853
- type: nauc_recall_at_10_diff1
value: 1.4674844176780142
- type: nauc_recall_at_10_max
value: 23.063996835379864
- type: nauc_recall_at_10_std
value: 30.225428590871335
- type: nauc_recall_at_1_diff1
value: 69.04595571010425
- type: nauc_recall_at_1_max
value: 42.68450581268141
- type: nauc_recall_at_1_std
value: 3.9078744944302226
- type: nauc_recall_at_20_diff1
value: 0.16098170706756573
- type: nauc_recall_at_20_max
value: 23.545698533798166
- type: nauc_recall_at_20_std
value: 35.37386093494575
- type: nauc_recall_at_3_diff1
value: 4.822099897775345
- type: nauc_recall_at_3_max
value: 19.882902254898895
- type: nauc_recall_at_3_std
value: 17.397463603075416
- type: nauc_recall_at_5_diff1
value: 3.177915079451333
- type: nauc_recall_at_5_max
value: 22.75320177307157
- type: nauc_recall_at_5_std
value: 24.028684632710416
- type: ndcg_at_1
value: 84.597
- type: ndcg_at_10
value: 77.35600000000001
- type: ndcg_at_100
value: 79.84700000000001
- type: ndcg_at_1000
value: 80.739
- type: ndcg_at_20
value: 78.457
- type: ndcg_at_3
value: 73.02499999999999
- type: ndcg_at_5
value: 75.493
- type: precision_at_1
value: 84.597
- type: precision_at_10
value: 16.091
- type: precision_at_100
value: 1.8010000000000002
- type: precision_at_1000
value: 0.192
- type: precision_at_20
value: 8.399
- type: precision_at_3
value: 47.292
- type: precision_at_5
value: 30.318
- type: recall_at_1
value: 42.299
- type: recall_at_10
value: 80.457
- type: recall_at_100
value: 90.03999999999999
- type: recall_at_1000
value: 95.91499999999999
- type: recall_at_20
value: 83.991
- type: recall_at_3
value: 70.938
- type: recall_at_5
value: 75.794
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA (default)
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: main_score
value: 75.62
- type: map_at_1
value: 41.715
- type: map_at_10
value: 67.84400000000001
- type: map_at_100
value: 68.676
- type: map_at_1000
value: 68.72399999999999
- type: map_at_20
value: 68.351
- type: map_at_3
value: 64.332
- type: map_at_5
value: 66.618
- type: mrr_at_1
value: 83.43011478730588
- type: mrr_at_10
value: 88.32890689474063
- type: mrr_at_100
value: 88.45342904155198
- type: mrr_at_1000
value: 88.45692717602427
- type: mrr_at_20
value: 88.41265148599933
- type: mrr_at_3
value: 87.6097231600268
- type: mrr_at_5
value: 88.08102633355813
- type: nauc_map_at_1000_diff1
value: 9.465654364107301
- type: nauc_map_at_1000_max
value: 15.417980238546377
- type: nauc_map_at_1000_std
value: 12.078075854093665
- type: nauc_map_at_100_diff1
value: 9.442359625098023
- type: nauc_map_at_100_max
value: 15.412594933146517
- type: nauc_map_at_100_std
value: 12.110494024932517
- type: nauc_map_at_10_diff1
value: 9.459426708991023
- type: nauc_map_at_10_max
value: 15.311848156939039
- type: nauc_map_at_10_std
value: 11.55461807074889
- type: nauc_map_at_1_diff1
value: 65.05713874046143
- type: nauc_map_at_1_max
value: 39.626722996510665
- type: nauc_map_at_1_std
value: -0.3991780785384316
- type: nauc_map_at_20_diff1
value: 9.328534555998699
- type: nauc_map_at_20_max
value: 15.307575956530108
- type: nauc_map_at_20_std
value: 11.96904723212192
- type: nauc_map_at_3_diff1
value: 8.915324889938061
- type: nauc_map_at_3_max
value: 13.514273119710563
- type: nauc_map_at_3_std
value: 8.332620819223683
- type: nauc_map_at_5_diff1
value: 8.63645860950366
- type: nauc_map_at_5_max
value: 14.350213952951254
- type: nauc_map_at_5_std
value: 10.554511015067682
- type: nauc_mrr_at_1000_diff1
value: 64.29376507350443
- type: nauc_mrr_at_1000_max
value: 42.432971323016226
- type: nauc_mrr_at_1000_std
value: 1.103214916935443
- type: nauc_mrr_at_100_diff1
value: 64.29483641804482
- type: nauc_mrr_at_100_max
value: 42.438961831187314
- type: nauc_mrr_at_100_std
value: 1.108904601847414
- type: nauc_mrr_at_10_diff1
value: 64.31510468330697
- type: nauc_mrr_at_10_max
value: 42.52427399840782
- type: nauc_mrr_at_10_std
value: 1.131217952433522
- type: nauc_mrr_at_1_diff1
value: 65.05713874046143
- type: nauc_mrr_at_1_max
value: 39.626722996510665
- type: nauc_mrr_at_1_std
value: -0.3991780785384316
- type: nauc_mrr_at_20_diff1
value: 64.28943699159083
- type: nauc_mrr_at_20_max
value: 42.48416850113432
- type: nauc_mrr_at_20_std
value: 1.1557131772785048
- type: nauc_mrr_at_3_diff1
value: 63.94398567446783
- type: nauc_mrr_at_3_max
value: 42.543599757686565
- type: nauc_mrr_at_3_std
value: 0.8656592208469659
- type: nauc_mrr_at_5_diff1
value: 64.26440164249783
- type: nauc_mrr_at_5_max
value: 42.76831128910234
- type: nauc_mrr_at_5_std
value: 0.9815638280513239
- type: nauc_ndcg_at_1000_diff1
value: 15.819261980172072
- type: nauc_ndcg_at_1000_max
value: 20.40080036519792
- type: nauc_ndcg_at_1000_std
value: 14.437662972269072
- type: nauc_ndcg_at_100_diff1
value: 14.934115203495086
- type: nauc_ndcg_at_100_max
value: 20.17258598061381
- type: nauc_ndcg_at_100_std
value: 15.368792248125951
- type: nauc_ndcg_at_10_diff1
value: 14.601053630285463
- type: nauc_ndcg_at_10_max
value: 19.4487220332248
- type: nauc_ndcg_at_10_std
value: 13.167535068795317
- type: nauc_ndcg_at_1_diff1
value: 65.05713874046143
- type: nauc_ndcg_at_1_max
value: 39.626722996510665
- type: nauc_ndcg_at_1_std
value: -0.3991780785384316
- type: nauc_ndcg_at_20_diff1
value: 14.179531301272236
- type: nauc_ndcg_at_20_max
value: 19.472746452573293
- type: nauc_ndcg_at_20_std
value: 14.501827055912294
- type: nauc_ndcg_at_3_diff1
value: 14.108042690817394
- type: nauc_ndcg_at_3_max
value: 16.987464708832828
- type: nauc_ndcg_at_3_std
value: 8.179470755035126
- type: nauc_ndcg_at_5_diff1
value: 13.385764378384962
- type: nauc_ndcg_at_5_max
value: 17.933522110142857
- type: nauc_ndcg_at_5_std
value: 11.19858703808597
- type: nauc_precision_at_1000_diff1
value: -11.509824758756242
- type: nauc_precision_at_1000_max
value: 22.55648484580021
- type: nauc_precision_at_1000_std
value: 52.19288714530133
- type: nauc_precision_at_100_diff1
value: -7.139163153266277
- type: nauc_precision_at_100_max
value: 18.186960433502737
- type: nauc_precision_at_100_std
value: 41.56352667223246
- type: nauc_precision_at_10_diff1
value: 0.19926178236397488
- type: nauc_precision_at_10_max
value: 15.790669273945133
- type: nauc_precision_at_10_std
value: 22.227701276074303
- type: nauc_precision_at_1_diff1
value: 65.05713874046143
- type: nauc_precision_at_1_max
value: 39.626722996510665
- type: nauc_precision_at_1_std
value: -0.3991780785384316
- type: nauc_precision_at_20_diff1
value: -3.7308762969820637
- type: nauc_precision_at_20_max
value: 15.252245858128093
- type: nauc_precision_at_20_std
value: 28.673602701400558
- type: nauc_precision_at_3_diff1
value: 2.200279758618242
- type: nauc_precision_at_3_max
value: 12.01603816399143
- type: nauc_precision_at_3_std
value: 10.776563947053933
- type: nauc_precision_at_5_diff1
value: -0.656454595582822
- type: nauc_precision_at_5_max
value: 12.954740919197965
- type: nauc_precision_at_5_std
value: 16.594853377568537
- type: nauc_recall_at_1000_diff1
value: -11.50982475875598
- type: nauc_recall_at_1000_max
value: 22.55648484580021
- type: nauc_recall_at_1000_std
value: 52.19288714530176
- type: nauc_recall_at_100_diff1
value: -7.139163153266106
- type: nauc_recall_at_100_max
value: 18.186960433502737
- type: nauc_recall_at_100_std
value: 41.56352667223245
- type: nauc_recall_at_10_diff1
value: 0.19926178236406988
- type: nauc_recall_at_10_max
value: 15.790669273945342
- type: nauc_recall_at_10_std
value: 22.22770127607443
- type: nauc_recall_at_1_diff1
value: 65.05713874046143
- type: nauc_recall_at_1_max
value: 39.626722996510665
- type: nauc_recall_at_1_std
value: -0.3991780785384316
- type: nauc_recall_at_20_diff1
value: -3.7308762969819664
- type: nauc_recall_at_20_max
value: 15.252245858128083
- type: nauc_recall_at_20_std
value: 28.673602701400608
- type: nauc_recall_at_3_diff1
value: 2.200279758618139
- type: nauc_recall_at_3_max
value: 12.016038163991432
- type: nauc_recall_at_3_std
value: 10.776563947053829
- type: nauc_recall_at_5_diff1
value: -0.6564545955828385
- type: nauc_recall_at_5_max
value: 12.954740919197997
- type: nauc_recall_at_5_std
value: 16.59485337756855
- type: ndcg_at_1
value: 83.43
- type: ndcg_at_10
value: 75.62
- type: ndcg_at_100
value: 78.365
- type: ndcg_at_1000
value: 79.278
- type: ndcg_at_20
value: 76.831
- type: ndcg_at_3
value: 70.862
- type: ndcg_at_5
value: 73.64
- type: precision_at_1
value: 83.43
- type: precision_at_10
value: 15.776
- type: precision_at_100
value: 1.79
- type: precision_at_1000
value: 0.191
- type: precision_at_20
value: 8.276
- type: precision_at_3
value: 45.631
- type: precision_at_5
value: 29.572
- type: recall_at_1
value: 41.715
- type: recall_at_10
value: 78.879
- type: recall_at_100
value: 89.507
- type: recall_at_1000
value: 95.537
- type: recall_at_20
value: 82.762
- type: recall_at_3
value: 68.447
- type: recall_at_5
value: 73.93
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA (default)
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: train
type: mteb/hotpotqa
metrics:
- type: main_score
value: 77.837
- type: map_at_1
value: 42.368
- type: map_at_10
value: 70.482
- type: map_at_100
value: 71.254
- type: map_at_1000
value: 71.3
- type: map_at_20
value: 70.951
- type: map_at_3
value: 67.094
- type: map_at_5
value: 69.287
- type: mrr_at_1
value: 84.73647058823529
- type: mrr_at_10
value: 89.43228011204313
- type: mrr_at_100
value: 89.53538640990537
- type: mrr_at_1000
value: 89.53820110602267
- type: mrr_at_20
value: 89.5025639405047
- type: mrr_at_3
value: 88.76078431372584
- type: mrr_at_5
value: 89.21313725490114
- type: nauc_map_at_1000_diff1
value: 12.622422238298029
- type: nauc_map_at_1000_max
value: 24.134646613977147
- type: nauc_map_at_1000_std
value: 18.559113679096974
- type: nauc_map_at_100_diff1
value: 12.595518910984365
- type: nauc_map_at_100_max
value: 24.13615988100401
- type: nauc_map_at_100_std
value: 18.594772743956266
- type: nauc_map_at_10_diff1
value: 12.31736038153525
- type: nauc_map_at_10_max
value: 23.887804934291093
- type: nauc_map_at_10_std
value: 18.137521899470006
- type: nauc_map_at_1_diff1
value: 68.72516447237027
- type: nauc_map_at_1_max
value: 44.3569136727875
- type: nauc_map_at_1_std
value: 6.39841495768188
- type: nauc_map_at_20_diff1
value: 12.468069986147025
- type: nauc_map_at_20_max
value: 24.078546039077274
- type: nauc_map_at_20_std
value: 18.522291511348463
- type: nauc_map_at_3_diff1
value: 11.842231338011665
- type: nauc_map_at_3_max
value: 22.112542722165667
- type: nauc_map_at_3_std
value: 14.832260061022543
- type: nauc_map_at_5_diff1
value: 12.034798052329245
- type: nauc_map_at_5_max
value: 23.31731384989271
- type: nauc_map_at_5_std
value: 17.01434920419027
- type: nauc_mrr_at_1000_diff1
value: 68.07028540743218
- type: nauc_mrr_at_1000_max
value: 47.244151670522704
- type: nauc_mrr_at_1000_std
value: 9.103356279698557
- type: nauc_mrr_at_100_diff1
value: 68.07124406272081
- type: nauc_mrr_at_100_max
value: 47.251355072908616
- type: nauc_mrr_at_100_std
value: 9.114544406098922
- type: nauc_mrr_at_10_diff1
value: 68.05566531720568
- type: nauc_mrr_at_10_max
value: 47.34781296160981
- type: nauc_mrr_at_10_std
value: 9.162073165810337
- type: nauc_mrr_at_1_diff1
value: 68.72516447237027
- type: nauc_mrr_at_1_max
value: 44.3569136727875
- type: nauc_mrr_at_1_std
value: 6.39841495768188
- type: nauc_mrr_at_20_diff1
value: 68.06579079523253
- type: nauc_mrr_at_20_max
value: 47.29519256825747
- type: nauc_mrr_at_20_std
value: 9.157454906021048
- type: nauc_mrr_at_3_diff1
value: 67.86665880252679
- type: nauc_mrr_at_3_max
value: 47.32534131711564
- type: nauc_mrr_at_3_std
value: 8.794606309056801
- type: nauc_mrr_at_5_diff1
value: 68.01593510697437
- type: nauc_mrr_at_5_max
value: 47.43102895637358
- type: nauc_mrr_at_5_std
value: 9.090489695071675
- type: nauc_ndcg_at_1000_diff1
value: 19.409351180430658
- type: nauc_ndcg_at_1000_max
value: 28.708136310658155
- type: nauc_ndcg_at_1000_std
value: 21.135251598909345
- type: nauc_ndcg_at_100_diff1
value: 18.544111410209364
- type: nauc_ndcg_at_100_max
value: 28.691312106667215
- type: nauc_ndcg_at_100_std
value: 22.159472487586196
- type: nauc_ndcg_at_10_diff1
value: 17.18622230783884
- type: nauc_ndcg_at_10_max
value: 27.61517105165476
- type: nauc_ndcg_at_10_std
value: 20.381795917366187
- type: nauc_ndcg_at_1_diff1
value: 68.72516447237027
- type: nauc_ndcg_at_1_max
value: 44.3569136727875
- type: nauc_ndcg_at_1_std
value: 6.39841495768188
- type: nauc_ndcg_at_20_diff1
value: 17.621217561108292
- type: nauc_ndcg_at_20_max
value: 28.220217881192745
- type: nauc_ndcg_at_20_std
value: 21.634321155851048
- type: nauc_ndcg_at_3_diff1
value: 16.95281740780042
- type: nauc_ndcg_at_3_max
value: 25.139541410129908
- type: nauc_ndcg_at_3_std
value: 15.071626218489095
- type: nauc_ndcg_at_5_diff1
value: 16.85509256640343
- type: nauc_ndcg_at_5_max
value: 26.62380882436261
- type: nauc_ndcg_at_5_std
value: 18.144940484549487
- type: nauc_precision_at_1000_diff1
value: -2.498904728204529
- type: nauc_precision_at_1000_max
value: 33.673710106830924
- type: nauc_precision_at_1000_std
value: 60.30188328802003
- type: nauc_precision_at_100_diff1
value: -0.708165353412955
- type: nauc_precision_at_100_max
value: 29.52115017710721
- type: nauc_precision_at_100_std
value: 49.19453346494841
- type: nauc_precision_at_10_diff1
value: 2.2783774953634794
- type: nauc_precision_at_10_max
value: 24.999953606470182
- type: nauc_precision_at_10_std
value: 30.42307537842161
- type: nauc_precision_at_1_diff1
value: 68.72516447237027
- type: nauc_precision_at_1_max
value: 44.3569136727875
- type: nauc_precision_at_1_std
value: 6.39841495768188
- type: nauc_precision_at_20_diff1
value: 1.1464298366823311
- type: nauc_precision_at_20_max
value: 26.511392023129375
- type: nauc_precision_at_20_std
value: 36.70867843499613
- type: nauc_precision_at_3_diff1
value: 5.688601758765791
- type: nauc_precision_at_3_max
value: 21.188583258128727
- type: nauc_precision_at_3_std
value: 17.592622457537157
- type: nauc_precision_at_5_diff1
value: 3.77247674190975
- type: nauc_precision_at_5_max
value: 23.106552905037606
- type: nauc_precision_at_5_std
value: 23.561612818949644
- type: nauc_recall_at_1000_diff1
value: -2.498904728204562
- type: nauc_recall_at_1000_max
value: 33.67371010683099
- type: nauc_recall_at_1000_std
value: 60.301883288019994
- type: nauc_recall_at_100_diff1
value: -0.7081653534129272
- type: nauc_recall_at_100_max
value: 29.52115017710731
- type: nauc_recall_at_100_std
value: 49.194533464948535
- type: nauc_recall_at_10_diff1
value: 2.2783774953635603
- type: nauc_recall_at_10_max
value: 24.999953606470118
- type: nauc_recall_at_10_std
value: 30.423075378421586
- type: nauc_recall_at_1_diff1
value: 68.72516447237027
- type: nauc_recall_at_1_max
value: 44.3569136727875
- type: nauc_recall_at_1_std
value: 6.39841495768188
- type: nauc_recall_at_20_diff1
value: 1.146429836682064
- type: nauc_recall_at_20_max
value: 26.5113920231293
- type: nauc_recall_at_20_std
value: 36.70867843499605
- type: nauc_recall_at_3_diff1
value: 5.688601758765744
- type: nauc_recall_at_3_max
value: 21.18858325812871
- type: nauc_recall_at_3_std
value: 17.592622457537157
- type: nauc_recall_at_5_diff1
value: 3.7724767419099234
- type: nauc_recall_at_5_max
value: 23.106552905037674
- type: nauc_recall_at_5_std
value: 23.561612818949783
- type: ndcg_at_1
value: 84.736
- type: ndcg_at_10
value: 77.837
- type: ndcg_at_100
value: 80.357
- type: ndcg_at_1000
value: 81.183
- type: ndcg_at_20
value: 78.949
- type: ndcg_at_3
value: 73.258
- type: ndcg_at_5
value: 75.919
- type: precision_at_1
value: 84.736
- type: precision_at_10
value: 16.251
- type: precision_at_100
value: 1.82
- type: precision_at_1000
value: 0.193
- type: precision_at_20
value: 8.482
- type: precision_at_3
value: 47.475
- type: precision_at_5
value: 30.582
- type: recall_at_1
value: 42.368
- type: recall_at_10
value: 81.255
- type: recall_at_100
value: 90.994
- type: recall_at_1000
value: 96.398
- type: recall_at_20
value: 84.824
- type: recall_at_3
value: 71.213
- type: recall_at_5
value: 76.456
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO (default)
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: main_score
value: 43.462
- type: map_at_1
value: 23.25
- type: map_at_10
value: 36.224
- type: map_at_100
value: 37.349
- type: map_at_1000
value: 37.392
- type: map_at_20
value: 36.921
- type: map_at_3
value: 32.208
- type: map_at_5
value: 34.573
- type: mrr_at_1
value: 23.88252148997135
- type: mrr_at_10
value: 36.85216832673849
- type: mrr_at_100
value: 37.90739898332828
- type: mrr_at_1000
value: 37.94515095895543
- type: mrr_at_20
value: 37.51240671241301
- type: mrr_at_3
value: 32.91786055396362
- type: mrr_at_5
value: 35.23304680038204
- type: nauc_map_at_1000_diff1
value: 36.39047949939039
- type: nauc_map_at_1000_max
value: 2.3578743172188035
- type: nauc_map_at_1000_std
value: -18.727873389577592
- type: nauc_map_at_100_diff1
value: 36.384143241496226
- type: nauc_map_at_100_max
value: 2.3497513932749614
- type: nauc_map_at_100_std
value: -18.70122938038941
- type: nauc_map_at_10_diff1
value: 36.33329278355692
- type: nauc_map_at_10_max
value: 2.138450676545341
- type: nauc_map_at_10_std
value: -19.45579958491671
- type: nauc_map_at_1_diff1
value: 39.404102475568564
- type: nauc_map_at_1_max
value: 2.7206579628418126
- type: nauc_map_at_1_std
value: -16.855247645496085
- type: nauc_map_at_20_diff1
value: 36.302767883282456
- type: nauc_map_at_20_max
value: 2.2735066233134695
- type: nauc_map_at_20_std
value: -18.973295136131522
- type: nauc_map_at_3_diff1
value: 36.56553095724739
- type: nauc_map_at_3_max
value: 2.3275087952103526
- type: nauc_map_at_3_std
value: -19.3527032157449
- type: nauc_map_at_5_diff1
value: 36.40211831532397
- type: nauc_map_at_5_max
value: 2.235741458377666
- type: nauc_map_at_5_std
value: -19.701014659193824
- type: nauc_mrr_at_1000_diff1
value: 36.438574231588525
- type: nauc_mrr_at_1000_max
value: 2.485811765062565
- type: nauc_mrr_at_1000_std
value: -18.5317957659061
- type: nauc_mrr_at_100_diff1
value: 36.432843922329596
- type: nauc_mrr_at_100_max
value: 2.4824945841823816
- type: nauc_mrr_at_100_std
value: -18.50245936037501
- type: nauc_mrr_at_10_diff1
value: 36.37249341280693
- type: nauc_mrr_at_10_max
value: 2.3153304860037607
- type: nauc_mrr_at_10_std
value: -19.22693970447962
- type: nauc_mrr_at_1_diff1
value: 39.38128062971168
- type: nauc_mrr_at_1_max
value: 2.7209494702622874
- type: nauc_mrr_at_1_std
value: -16.953692595799737
- type: nauc_mrr_at_20_diff1
value: 36.3579490781177
- type: nauc_mrr_at_20_max
value: 2.4387677123377283
- type: nauc_mrr_at_20_std
value: -18.732976355263567
- type: nauc_mrr_at_3_diff1
value: 36.533228792596574
- type: nauc_mrr_at_3_max
value: 2.361606755695883
- type: nauc_mrr_at_3_std
value: -19.245211696661034
- type: nauc_mrr_at_5_diff1
value: 36.3816321319283
- type: nauc_mrr_at_5_max
value: 2.3437756296821632
- type: nauc_mrr_at_5_std
value: -19.471789402286344
- type: nauc_ndcg_at_1000_diff1
value: 35.79039219929976
- type: nauc_ndcg_at_1000_max
value: 2.811728033687246
- type: nauc_ndcg_at_1000_std
value: -17.338286061955813
- type: nauc_ndcg_at_100_diff1
value: 35.59261399719066
- type: nauc_ndcg_at_100_max
value: 2.7108910063207783
- type: nauc_ndcg_at_100_std
value: -16.30247877675029
- type: nauc_ndcg_at_10_diff1
value: 35.33021934007167
- type: nauc_ndcg_at_10_max
value: 1.8215726138615624
- type: nauc_ndcg_at_10_std
value: -20.06278292037688
- type: nauc_ndcg_at_1_diff1
value: 39.38128062971168
- type: nauc_ndcg_at_1_max
value: 2.7209494702622874
- type: nauc_ndcg_at_1_std
value: -16.953692595799737
- type: nauc_ndcg_at_20_diff1
value: 35.166139885264435
- type: nauc_ndcg_at_20_max
value: 2.2458844698840195
- type: nauc_ndcg_at_20_std
value: -18.248706272894776
- type: nauc_ndcg_at_3_diff1
value: 35.815749048912664
- type: nauc_ndcg_at_3_max
value: 2.138161873272173
- type: nauc_ndcg_at_3_std
value: -20.118216970119295
- type: nauc_ndcg_at_5_diff1
value: 35.55268589882809
- type: nauc_ndcg_at_5_max
value: 2.0174915835937095
- type: nauc_ndcg_at_5_std
value: -20.691081813335547
- type: nauc_precision_at_1000_diff1
value: -3.3391122943171885
- type: nauc_precision_at_1000_max
value: 11.198425802216269
- type: nauc_precision_at_1000_std
value: 13.383104359443937
- type: nauc_precision_at_100_diff1
value: 12.850391114610302
- type: nauc_precision_at_100_max
value: 8.157136543556543
- type: nauc_precision_at_100_std
value: 16.476563311300353
- type: nauc_precision_at_10_diff1
value: 28.63945922218073
- type: nauc_precision_at_10_max
value: 0.455900949813612
- type: nauc_precision_at_10_std
value: -20.77018206831735
- type: nauc_precision_at_1_diff1
value: 39.38128062971168
- type: nauc_precision_at_1_max
value: 2.7209494702622874
- type: nauc_precision_at_1_std
value: -16.953692595799737
- type: nauc_precision_at_20_diff1
value: 24.195296149610957
- type: nauc_precision_at_20_max
value: 2.5484785002551718
- type: nauc_precision_at_20_std
value: -10.930465943156257
- type: nauc_precision_at_3_diff1
value: 33.06268024815025
- type: nauc_precision_at_3_max
value: 1.6291541332500454
- type: nauc_precision_at_3_std
value: -22.18898625767765
- type: nauc_precision_at_5_diff1
value: 31.65289218498212
- type: nauc_precision_at_5_max
value: 1.2951472084768743
- type: nauc_precision_at_5_std
value: -23.27704936042841
- type: nauc_recall_at_1000_diff1
value: 23.23177983481788
- type: nauc_recall_at_1000_max
value: 38.7253356088564
- type: nauc_recall_at_1000_std
value: 67.48000156648311
- type: nauc_recall_at_100_diff1
value: 28.544420505491562
- type: nauc_recall_at_100_max
value: 7.671908258293046
- type: nauc_recall_at_100_std
value: 21.858917656037523
- type: nauc_recall_at_10_diff1
value: 31.49652837714782
- type: nauc_recall_at_10_max
value: 0.4106392530350634
- type: nauc_recall_at_10_std
value: -21.78064007132412
- type: nauc_recall_at_1_diff1
value: 39.404102475568564
- type: nauc_recall_at_1_max
value: 2.7206579628418126
- type: nauc_recall_at_1_std
value: -16.855247645496085
- type: nauc_recall_at_20_diff1
value: 29.666357411097906
- type: nauc_recall_at_20_max
value: 1.9441414764681684
- type: nauc_recall_at_20_std
value: -12.932407352213746
- type: nauc_recall_at_3_diff1
value: 33.55593640265306
- type: nauc_recall_at_3_max
value: 1.5516845419621723
- type: nauc_recall_at_3_std
value: -22.119363526106568
- type: nauc_recall_at_5_diff1
value: 32.857815579888154
- type: nauc_recall_at_5_max
value: 1.2405193929536131
- type: nauc_recall_at_5_std
value: -23.542815544770555
- type: ndcg_at_1
value: 23.883
- type: ndcg_at_10
value: 43.462
- type: ndcg_at_100
value: 48.845
- type: ndcg_at_1000
value: 49.883
- type: ndcg_at_20
value: 45.921
- type: ndcg_at_3
value: 35.322
- type: ndcg_at_5
value: 39.512
- type: precision_at_1
value: 23.883
- type: precision_at_10
value: 6.862
- type: precision_at_100
value: 0.956
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 3.946
- type: precision_at_3
value: 15.076
- type: precision_at_5
value: 11.158
- type: recall_at_1
value: 23.25
- type: recall_at_10
value: 65.694
- type: recall_at_100
value: 90.554
- type: recall_at_1000
value: 98.378
- type: recall_at_20
value: 75.224
- type: recall_at_3
value: 43.628
- type: recall_at_5
value: 53.659
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO (default)
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: test
type: mteb/msmarco
metrics:
- type: main_score
value: 74.139
- type: map_at_1
value: 2.464
- type: map_at_10
value: 16.541
- type: map_at_100
value: 44.478
- type: map_at_1000
value: 53.15
- type: map_at_20
value: 25.904
- type: map_at_3
value: 6.765
- type: map_at_5
value: 9.983
- type: mrr_at_1
value: 95.34883720930233
- type: mrr_at_10
value: 97.28682170542636
- type: mrr_at_100
value: 97.28682170542636
- type: mrr_at_1000
value: 97.28682170542636
- type: mrr_at_20
value: 97.28682170542636
- type: mrr_at_3
value: 97.28682170542636
- type: mrr_at_5
value: 97.28682170542636
- type: nauc_map_at_1000_diff1
value: -24.31518623918347
- type: nauc_map_at_1000_max
value: 33.70070261129663
- type: nauc_map_at_1000_std
value: 52.73406144577475
- type: nauc_map_at_100_diff1
value: -6.716075858891885
- type: nauc_map_at_100_max
value: 14.830377435009204
- type: nauc_map_at_100_std
value: 22.182430558548326
- type: nauc_map_at_10_diff1
value: 22.52761274919368
- type: nauc_map_at_10_max
value: -10.100583311291869
- type: nauc_map_at_10_std
value: -24.033121680575295
- type: nauc_map_at_1_diff1
value: 34.97928775395744
- type: nauc_map_at_1_max
value: -29.165988209556343
- type: nauc_map_at_1_std
value: -40.87952221234793
- type: nauc_map_at_20_diff1
value: 15.889296464003886
- type: nauc_map_at_20_max
value: -4.223749887147732
- type: nauc_map_at_20_std
value: -11.765238600018108
- type: nauc_map_at_3_diff1
value: 35.02306731951517
- type: nauc_map_at_3_max
value: -25.811140250024874
- type: nauc_map_at_3_std
value: -37.502121900015425
- type: nauc_map_at_5_diff1
value: 31.60050502637396
- type: nauc_map_at_5_max
value: -19.753939742728406
- type: nauc_map_at_5_std
value: -32.326759394631495
- type: nauc_mrr_at_1000_diff1
value: -1.6109249129507694
- type: nauc_mrr_at_1000_max
value: -5.264078482070403
- type: nauc_mrr_at_1000_std
value: -16.896242659959608
- type: nauc_mrr_at_100_diff1
value: -1.6109249129507694
- type: nauc_mrr_at_100_max
value: -5.264078482070403
- type: nauc_mrr_at_100_std
value: -16.896242659959608
- type: nauc_mrr_at_10_diff1
value: -1.6109249129507694
- type: nauc_mrr_at_10_max
value: -5.264078482070403
- type: nauc_mrr_at_10_std
value: -16.896242659959608
- type: nauc_mrr_at_1_diff1
value: 7.609161311583414
- type: nauc_mrr_at_1_max
value: -3.1385223772769497
- type: nauc_mrr_at_1_std
value: -28.92678640083504
- type: nauc_mrr_at_20_diff1
value: -1.6109249129507694
- type: nauc_mrr_at_20_max
value: -5.264078482070403
- type: nauc_mrr_at_20_std
value: -16.896242659959608
- type: nauc_mrr_at_3_diff1
value: -1.6109249129507694
- type: nauc_mrr_at_3_max
value: -5.264078482070403
- type: nauc_mrr_at_3_std
value: -16.896242659959608
- type: nauc_mrr_at_5_diff1
value: -1.6109249129507694
- type: nauc_mrr_at_5_max
value: -5.264078482070403
- type: nauc_mrr_at_5_std
value: -16.896242659959608
- type: nauc_ndcg_at_1000_diff1
value: -30.3495925805214
- type: nauc_ndcg_at_1000_max
value: 48.80276747021238
- type: nauc_ndcg_at_1000_std
value: 54.598664753311596
- type: nauc_ndcg_at_100_diff1
value: -21.4043832806614
- type: nauc_ndcg_at_100_max
value: 30.876451567336744
- type: nauc_ndcg_at_100_std
value: 49.443818028199324
- type: nauc_ndcg_at_10_diff1
value: -0.45843729874817324
- type: nauc_ndcg_at_10_max
value: 19.369035024488383
- type: nauc_ndcg_at_10_std
value: 15.441351418216314
- type: nauc_ndcg_at_1_diff1
value: 27.57020304062517
- type: nauc_ndcg_at_1_max
value: 13.126334420445016
- type: nauc_ndcg_at_1_std
value: -29.628242116322607
- type: nauc_ndcg_at_20_diff1
value: -15.246366332733999
- type: nauc_ndcg_at_20_max
value: 14.478542591051463
- type: nauc_ndcg_at_20_std
value: 27.20707635200001
- type: nauc_ndcg_at_3_diff1
value: 14.58709456804409
- type: nauc_ndcg_at_3_max
value: 13.824849529705482
- type: nauc_ndcg_at_3_std
value: -8.313833570480671
- type: nauc_ndcg_at_5_diff1
value: 8.91665165479885
- type: nauc_ndcg_at_5_max
value: 13.930708098322576
- type: nauc_ndcg_at_5_std
value: 2.127642899981599
- type: nauc_precision_at_1000_diff1
value: -40.268595202063054
- type: nauc_precision_at_1000_max
value: 25.88884164935188
- type: nauc_precision_at_1000_std
value: 55.568406766964415
- type: nauc_precision_at_100_diff1
value: -42.911915287643346
- type: nauc_precision_at_100_max
value: 30.08901353124011
- type: nauc_precision_at_100_std
value: 62.17803024269468
- type: nauc_precision_at_10_diff1
value: -43.802137487466524
- type: nauc_precision_at_10_max
value: 41.558045207768075
- type: nauc_precision_at_10_std
value: 66.11133414044444
- type: nauc_precision_at_1_diff1
value: 7.609161311583414
- type: nauc_precision_at_1_max
value: -3.1385223772769497
- type: nauc_precision_at_1_std
value: -28.92678640083504
- type: nauc_precision_at_20_diff1
value: -45.342704264263865
- type: nauc_precision_at_20_max
value: 26.376743923651265
- type: nauc_precision_at_20_std
value: 64.3163432020867
- type: nauc_precision_at_3_diff1
value: -16.02113730834142
- type: nauc_precision_at_3_max
value: 24.617646770629815
- type: nauc_precision_at_3_std
value: 35.79299638781981
- type: nauc_precision_at_5_diff1
value: -18.344530395955896
- type: nauc_precision_at_5_max
value: 34.95602706071007
- type: nauc_precision_at_5_std
value: 55.121489979935255
- type: nauc_recall_at_1000_diff1
value: -43.604640987833875
- type: nauc_recall_at_1000_max
value: 58.59201591599778
- type: nauc_recall_at_1000_std
value: 58.04926306248595
- type: nauc_recall_at_100_diff1
value: -1.8581886293054308
- type: nauc_recall_at_100_max
value: 17.598407276190557
- type: nauc_recall_at_100_std
value: 16.1056507235371
- type: nauc_recall_at_10_diff1
value: 24.296861713164493
- type: nauc_recall_at_10_max
value: -12.840082189664468
- type: nauc_recall_at_10_std
value: -27.648232955581015
- type: nauc_recall_at_1_diff1
value: 34.97928775395744
- type: nauc_recall_at_1_max
value: -29.165988209556343
- type: nauc_recall_at_1_std
value: -40.87952221234793
- type: nauc_recall_at_20_diff1
value: 17.34425404446603
- type: nauc_recall_at_20_max
value: -6.759844869600909
- type: nauc_recall_at_20_std
value: -16.34420887019204
- type: nauc_recall_at_3_diff1
value: 35.7400036137557
- type: nauc_recall_at_3_max
value: -26.22669187910205
- type: nauc_recall_at_3_std
value: -38.248247791322314
- type: nauc_recall_at_5_diff1
value: 33.10320420212989
- type: nauc_recall_at_5_max
value: -20.833157601550315
- type: nauc_recall_at_5_std
value: -34.06908006216781
- type: ndcg_at_1
value: 76.744
- type: ndcg_at_10
value: 74.139
- type: ndcg_at_100
value: 68.147
- type: ndcg_at_1000
value: 75.659
- type: ndcg_at_20
value: 71.788
- type: ndcg_at_3
value: 75.696
- type: ndcg_at_5
value: 74.787
- type: precision_at_1
value: 95.349
- type: precision_at_10
value: 84.186
- type: precision_at_100
value: 40.163
- type: precision_at_1000
value: 7.458
- type: precision_at_20
value: 74.767
- type: precision_at_3
value: 89.922
- type: precision_at_5
value: 87.442
- type: recall_at_1
value: 2.464
- type: recall_at_10
value: 17.911
- type: recall_at_100
value: 55.969
- type: recall_at_1000
value: 82.416
- type: recall_at_20
value: 28.829
- type: recall_at_3
value: 6.866
- type: recall_at_5
value: 10.45
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO (default)
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: train
type: mteb/msmarco
metrics:
- type: main_score
value: 40.276
- type: map_at_1
value: 20.773
- type: map_at_10
value: 33.187
- type: map_at_100
value: 34.445
- type: map_at_1000
value: 34.491
- type: map_at_20
value: 33.969
- type: map_at_3
value: 29.156
- type: map_at_5
value: 31.446
- type: mrr_at_1
value: 21.359250326580362
- type: mrr_at_10
value: 33.705331647898106
- type: mrr_at_100
value: 34.90938915980538
- type: mrr_at_1000
value: 34.949373687506714
- type: mrr_at_20
value: 34.459868257867136
- type: mrr_at_3
value: 29.754569308269037
- type: mrr_at_5
value: 32.00982292750348
- type: nauc_map_at_1000_diff1
value: 34.01601087498396
- type: nauc_map_at_1000_max
value: -1.7691442171563223
- type: nauc_map_at_1000_std
value: -19.828285053003967
- type: nauc_map_at_100_diff1
value: 34.00675015775064
- type: nauc_map_at_100_max
value: -1.7686866050766759
- type: nauc_map_at_100_std
value: -19.794937232515526
- type: nauc_map_at_10_diff1
value: 33.925657930927954
- type: nauc_map_at_10_max
value: -1.9081926342048643
- type: nauc_map_at_10_std
value: -20.459142515845954
- type: nauc_map_at_1_diff1
value: 37.86779004020525
- type: nauc_map_at_1_max
value: -1.693381899018092
- type: nauc_map_at_1_std
value: -18.888409837359983
- type: nauc_map_at_20_diff1
value: 33.95897235069661
- type: nauc_map_at_20_max
value: -1.8385762082257249
- type: nauc_map_at_20_std
value: -20.049973139551135
- type: nauc_map_at_3_diff1
value: 34.1811433717322
- type: nauc_map_at_3_max
value: -1.9862134491651453
- type: nauc_map_at_3_std
value: -20.7157496103899
- type: nauc_map_at_5_diff1
value: 33.945489663762515
- type: nauc_map_at_5_max
value: -1.9633952142297522
- type: nauc_map_at_5_std
value: -20.83632680413325
- type: nauc_mrr_at_1000_diff1
value: 33.999206219812045
- type: nauc_mrr_at_1000_max
value: -1.7412465287451229
- type: nauc_mrr_at_1000_std
value: -19.800789638791937
- type: nauc_mrr_at_100_diff1
value: 33.99041315883828
- type: nauc_mrr_at_100_max
value: -1.7393575325316621
- type: nauc_mrr_at_100_std
value: -19.7676764349925
- type: nauc_mrr_at_10_diff1
value: 33.90510191763504
- type: nauc_mrr_at_10_max
value: -1.8632220774794626
- type: nauc_mrr_at_10_std
value: -20.39043116739617
- type: nauc_mrr_at_1_diff1
value: 37.92957327608907
- type: nauc_mrr_at_1_max
value: -1.6241365807332726
- type: nauc_mrr_at_1_std
value: -19.02476057424658
- type: nauc_mrr_at_20_diff1
value: 33.94188630069156
- type: nauc_mrr_at_20_max
value: -1.799932652089817
- type: nauc_mrr_at_20_std
value: -19.997042702823485
- type: nauc_mrr_at_3_diff1
value: 34.16520468314214
- type: nauc_mrr_at_3_max
value: -1.9279951943420828
- type: nauc_mrr_at_3_std
value: -20.706091936842984
- type: nauc_mrr_at_5_diff1
value: 33.92480963299017
- type: nauc_mrr_at_5_max
value: -1.9122782451155143
- type: nauc_mrr_at_5_std
value: -20.781713634553793
- type: nauc_ndcg_at_1000_diff1
value: 33.184126158160126
- type: nauc_ndcg_at_1000_max
value: -1.1875420124951162
- type: nauc_ndcg_at_1000_std
value: -18.23591819025179
- type: nauc_ndcg_at_100_diff1
value: 32.935688069598314
- type: nauc_ndcg_at_100_max
value: -1.0828464321478635
- type: nauc_ndcg_at_100_std
value: -16.99124635594882
- type: nauc_ndcg_at_10_diff1
value: 32.5885629805019
- type: nauc_ndcg_at_10_max
value: -1.8951992549933268
- type: nauc_ndcg_at_10_std
value: -20.400520136402704
- type: nauc_ndcg_at_1_diff1
value: 37.953966660906325
- type: nauc_ndcg_at_1_max
value: -1.637085728039103
- type: nauc_ndcg_at_1_std
value: -19.029991106168055
- type: nauc_ndcg_at_20_diff1
value: 32.659068964537944
- type: nauc_ndcg_at_20_max
value: -1.6414522913717806
- type: nauc_ndcg_at_20_std
value: -18.857438624779295
- type: nauc_ndcg_at_3_diff1
value: 33.13495243897897
- type: nauc_ndcg_at_3_max
value: -2.056752787606917
- type: nauc_ndcg_at_3_std
value: -21.17861388162733
- type: nauc_ndcg_at_5_diff1
value: 32.69463838392566
- type: nauc_ndcg_at_5_max
value: -2.025092695004754
- type: nauc_ndcg_at_5_std
value: -21.34771429039138
- type: nauc_precision_at_1000_diff1
value: -2.8558032644991016
- type: nauc_precision_at_1000_max
value: 9.86657019787611
- type: nauc_precision_at_1000_std
value: 10.988749489672406
- type: nauc_precision_at_100_diff1
value: 12.864328710169968
- type: nauc_precision_at_100_max
value: 7.464201984721404
- type: nauc_precision_at_100_std
value: 16.13392945907579
- type: nauc_precision_at_10_diff1
value: 26.399898010761824
- type: nauc_precision_at_10_max
value: -1.2999170215959819
- type: nauc_precision_at_10_std
value: -18.71491641617564
- type: nauc_precision_at_1_diff1
value: 37.953966660906325
- type: nauc_precision_at_1_max
value: -1.637085728039103
- type: nauc_precision_at_1_std
value: -19.029991106168055
- type: nauc_precision_at_20_diff1
value: 23.79119509543501
- type: nauc_precision_at_20_max
value: 0.17939408447227603
- type: nauc_precision_at_20_std
value: -10.441178169364324
- type: nauc_precision_at_3_diff1
value: 30.04047755424759
- type: nauc_precision_at_3_max
value: -2.136156697163606
- type: nauc_precision_at_3_std
value: -22.2944352990041
- type: nauc_precision_at_5_diff1
value: 28.422010621063933
- type: nauc_precision_at_5_max
value: -1.9424211602360555
- type: nauc_precision_at_5_std
value: -22.333141313684994
- type: nauc_recall_at_1000_diff1
value: 13.732116062991514
- type: nauc_recall_at_1000_max
value: 45.18551288931526
- type: nauc_recall_at_1000_std
value: 71.21674392317534
- type: nauc_recall_at_100_diff1
value: 24.303127023267894
- type: nauc_recall_at_100_max
value: 8.834243296556114
- type: nauc_recall_at_100_std
value: 23.97303108762705
- type: nauc_recall_at_10_diff1
value: 28.10048351507634
- type: nauc_recall_at_10_max
value: -1.8539512450800857
- type: nauc_recall_at_10_std
value: -19.61933014312325
- type: nauc_recall_at_1_diff1
value: 37.86779004020525
- type: nauc_recall_at_1_max
value: -1.693381899018092
- type: nauc_recall_at_1_std
value: -18.888409837359983
- type: nauc_recall_at_20_diff1
value: 27.298837251716414
- type: nauc_recall_at_20_max
value: -0.6338536811417125
- type: nauc_recall_at_20_std
value: -11.839172034010947
- type: nauc_recall_at_3_diff1
value: 30.29606466428335
- type: nauc_recall_at_3_max
value: -2.286134715776902
- type: nauc_recall_at_3_std
value: -22.284294332227482
- type: nauc_recall_at_5_diff1
value: 29.11776633049639
- type: nauc_recall_at_5_max
value: -2.227765233466803
- type: nauc_recall_at_5_std
value: -22.613701283140504
- type: ndcg_at_1
value: 21.353
- type: ndcg_at_10
value: 40.276
- type: ndcg_at_100
value: 46.323
- type: ndcg_at_1000
value: 47.418
- type: ndcg_at_20
value: 43.054
- type: ndcg_at_3
value: 32.055
- type: ndcg_at_5
value: 36.138
- type: precision_at_1
value: 21.353
- type: precision_at_10
value: 6.486
- type: precision_at_100
value: 0.949
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.818
- type: precision_at_3
value: 13.739
- type: precision_at_5
value: 10.309
- type: recall_at_1
value: 20.773
- type: recall_at_10
value: 62.276
- type: recall_at_100
value: 90.217
- type: recall_at_1000
value: 98.519
- type: recall_at_20
value: 73.072
- type: recall_at_3
value: 39.855
- type: recall_at_5
value: 49.676
task:
type: Retrieval
- dataset:
config: default
name: MTEB NFCorpus (default)
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: main_score
value: 37.283
- type: map_at_1
value: 5.574
- type: map_at_10
value: 14.005
- type: map_at_100
value: 17.796
- type: map_at_1000
value: 19.283
- type: map_at_20
value: 15.578
- type: map_at_3
value: 10.236
- type: map_at_5
value: 11.899
- type: mrr_at_1
value: 46.749226006191954
- type: mrr_at_10
value: 56.59811292938226
- type: mrr_at_100
value: 57.12051023998412
- type: mrr_at_1000
value: 57.15371186820038
- type: mrr_at_20
value: 56.916688370838195
- type: mrr_at_3
value: 54.12796697626418
- type: mrr_at_5
value: 55.768833849329205
- type: nauc_map_at_1000_diff1
value: 28.635277848807
- type: nauc_map_at_1000_max
value: 35.35613366796442
- type: nauc_map_at_1000_std
value: 17.10747783924917
- type: nauc_map_at_100_diff1
value: 29.755264219349424
- type: nauc_map_at_100_max
value: 34.327008938244944
- type: nauc_map_at_100_std
value: 13.445288572684394
- type: nauc_map_at_10_diff1
value: 32.48957394170802
- type: nauc_map_at_10_max
value: 27.80407105939758
- type: nauc_map_at_10_std
value: 1.9070818822162425
- type: nauc_map_at_1_diff1
value: 50.027513759193376
- type: nauc_map_at_1_max
value: 19.429910518237936
- type: nauc_map_at_1_std
value: -8.97104145052985
- type: nauc_map_at_20_diff1
value: 31.56634560890853
- type: nauc_map_at_20_max
value: 31.051371548545692
- type: nauc_map_at_20_std
value: 6.504916213964518
- type: nauc_map_at_3_diff1
value: 38.42783943501391
- type: nauc_map_at_3_max
value: 22.268789244002495
- type: nauc_map_at_3_std
value: -3.875096100356707
- type: nauc_map_at_5_diff1
value: 35.358236844401475
- type: nauc_map_at_5_max
value: 23.849302939085845
- type: nauc_map_at_5_std
value: -2.3503635251536994
- type: nauc_mrr_at_1000_diff1
value: 30.746859712785913
- type: nauc_mrr_at_1000_max
value: 53.6904747530386
- type: nauc_mrr_at_1000_std
value: 31.47487691466055
- type: nauc_mrr_at_100_diff1
value: 30.763063585195706
- type: nauc_mrr_at_100_max
value: 53.7250123160408
- type: nauc_mrr_at_100_std
value: 31.50978078188992
- type: nauc_mrr_at_10_diff1
value: 30.82775738393116
- type: nauc_mrr_at_10_max
value: 53.4071427116327
- type: nauc_mrr_at_10_std
value: 31.263564750803962
- type: nauc_mrr_at_1_diff1
value: 32.106085379422524
- type: nauc_mrr_at_1_max
value: 47.77541655844478
- type: nauc_mrr_at_1_std
value: 24.786702037536276
- type: nauc_mrr_at_20_diff1
value: 30.719148309921696
- type: nauc_mrr_at_20_max
value: 53.70017178047271
- type: nauc_mrr_at_20_std
value: 31.467979505375443
- type: nauc_mrr_at_3_diff1
value: 30.981638809404405
- type: nauc_mrr_at_3_max
value: 53.3270677412482
- type: nauc_mrr_at_3_std
value: 30.26681784453818
- type: nauc_mrr_at_5_diff1
value: 30.969579053025992
- type: nauc_mrr_at_5_max
value: 53.700404196240385
- type: nauc_mrr_at_5_std
value: 30.24431182973286
- type: nauc_ndcg_at_1000_diff1
value: 26.077520236345453
- type: nauc_ndcg_at_1000_max
value: 50.44278008464641
- type: nauc_ndcg_at_1000_std
value: 36.462860570166185
- type: nauc_ndcg_at_100_diff1
value: 25.784205218824514
- type: nauc_ndcg_at_100_max
value: 44.6479793696097
- type: nauc_ndcg_at_100_std
value: 29.51865427077206
- type: nauc_ndcg_at_10_diff1
value: 23.20557245363688
- type: nauc_ndcg_at_10_max
value: 42.22895428413661
- type: nauc_ndcg_at_10_std
value: 25.969842351518235
- type: nauc_ndcg_at_1_diff1
value: 33.427281404508435
- type: nauc_ndcg_at_1_max
value: 46.94546610566201
- type: nauc_ndcg_at_1_std
value: 24.496790902482985
- type: nauc_ndcg_at_20_diff1
value: 23.43536419777015
- type: nauc_ndcg_at_20_max
value: 42.0469006433796
- type: nauc_ndcg_at_20_std
value: 27.24688044890543
- type: nauc_ndcg_at_3_diff1
value: 25.933255443748944
- type: nauc_ndcg_at_3_max
value: 45.01703507302794
- type: nauc_ndcg_at_3_std
value: 24.53456197157044
- type: nauc_ndcg_at_5_diff1
value: 24.329950172007088
- type: nauc_ndcg_at_5_max
value: 42.83693422152606
- type: nauc_ndcg_at_5_std
value: 24.11535369089384
- type: nauc_precision_at_1000_diff1
value: -12.669594168389192
- type: nauc_precision_at_1000_max
value: 8.798164077517391
- type: nauc_precision_at_1000_std
value: 33.81862573258825
- type: nauc_precision_at_100_diff1
value: -7.005181564872601
- type: nauc_precision_at_100_max
value: 22.648723626866374
- type: nauc_precision_at_100_std
value: 43.65426389346721
- type: nauc_precision_at_10_diff1
value: 4.8405576299864945
- type: nauc_precision_at_10_max
value: 39.91286717889381
- type: nauc_precision_at_10_std
value: 35.574065561205096
- type: nauc_precision_at_1_diff1
value: 32.106085379422524
- type: nauc_precision_at_1_max
value: 47.77541655844478
- type: nauc_precision_at_1_std
value: 24.786702037536276
- type: nauc_precision_at_20_diff1
value: 0.08875655110882817
- type: nauc_precision_at_20_max
value: 34.77100967209973
- type: nauc_precision_at_20_std
value: 39.851412685464176
- type: nauc_precision_at_3_diff1
value: 16.574574215758624
- type: nauc_precision_at_3_max
value: 45.42842355154502
- type: nauc_precision_at_3_std
value: 28.31538323007723
- type: nauc_precision_at_5_diff1
value: 10.494687717697923
- type: nauc_precision_at_5_max
value: 42.0168314602896
- type: nauc_precision_at_5_std
value: 30.72486385311608
- type: nauc_recall_at_1000_diff1
value: 9.418427515050707
- type: nauc_recall_at_1000_max
value: 27.143782318814182
- type: nauc_recall_at_1000_std
value: 27.349192687153284
- type: nauc_recall_at_100_diff1
value: 16.884742295138704
- type: nauc_recall_at_100_max
value: 27.5200727845606
- type: nauc_recall_at_100_std
value: 16.76172862155474
- type: nauc_recall_at_10_diff1
value: 23.894239139033917
- type: nauc_recall_at_10_max
value: 20.19653160625137
- type: nauc_recall_at_10_std
value: -1.1818405987921334
- type: nauc_recall_at_1_diff1
value: 50.027513759193376
- type: nauc_recall_at_1_max
value: 19.429910518237936
- type: nauc_recall_at_1_std
value: -8.97104145052985
- type: nauc_recall_at_20_diff1
value: 23.687099370897887
- type: nauc_recall_at_20_max
value: 24.6629558566208
- type: nauc_recall_at_20_std
value: 5.407720319345621
- type: nauc_recall_at_3_diff1
value: 34.403660975814034
- type: nauc_recall_at_3_max
value: 20.066555724505257
- type: nauc_recall_at_3_std
value: -3.63779773997605
- type: nauc_recall_at_5_diff1
value: 27.409120048379066
- type: nauc_recall_at_5_max
value: 17.871400437143393
- type: nauc_recall_at_5_std
value: -4.490534640413254
- type: ndcg_at_1
value: 45.201
- type: ndcg_at_10
value: 37.283
- type: ndcg_at_100
value: 34.019
- type: ndcg_at_1000
value: 42.339
- type: ndcg_at_20
value: 34.827
- type: ndcg_at_3
value: 42.841
- type: ndcg_at_5
value: 40.778
- type: precision_at_1
value: 46.749
- type: precision_at_10
value: 27.771
- type: precision_at_100
value: 8.762
- type: precision_at_1000
value: 2.137
- type: precision_at_20
value: 20.759
- type: precision_at_3
value: 41.073
- type: precision_at_5
value: 35.975
- type: recall_at_1
value: 5.574
- type: recall_at_10
value: 18.631
- type: recall_at_100
value: 34.352
- type: recall_at_1000
value: 64.57
- type: recall_at_20
value: 22.359
- type: recall_at_3
value: 11.441
- type: recall_at_5
value: 14.493
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ (default)
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: main_score
value: 61.029
- type: map_at_1
value: 37.177
- type: map_at_10
value: 53.409
- type: map_at_100
value: 54.298
- type: map_at_1000
value: 54.315
- type: map_at_20
value: 54.025
- type: map_at_3
value: 49.05
- type: map_at_5
value: 51.82
- type: mrr_at_1
value: 41.59907300115875
- type: mrr_at_10
value: 55.78067235005224
- type: mrr_at_100
value: 56.41660993735389
- type: mrr_at_1000
value: 56.42754475461054
- type: mrr_at_20
value: 56.23518276066669
- type: mrr_at_3
value: 52.37543453070661
- type: mrr_at_5
value: 54.548088064889775
- type: nauc_map_at_1000_diff1
value: 37.27249375628604
- type: nauc_map_at_1000_max
value: 27.392138921419324
- type: nauc_map_at_1000_std
value: -3.5900106216193315
- type: nauc_map_at_100_diff1
value: 37.2697901014825
- type: nauc_map_at_100_max
value: 27.405921213076223
- type: nauc_map_at_100_std
value: -3.573566659351339
- type: nauc_map_at_10_diff1
value: 37.16335435590572
- type: nauc_map_at_10_max
value: 27.413006448193094
- type: nauc_map_at_10_std
value: -3.9602938844810303
- type: nauc_map_at_1_diff1
value: 40.79178035869281
- type: nauc_map_at_1_max
value: 21.840846704021168
- type: nauc_map_at_1_std
value: -6.154432706859515
- type: nauc_map_at_20_diff1
value: 37.19465980632151
- type: nauc_map_at_20_max
value: 27.472653634570786
- type: nauc_map_at_20_std
value: -3.6471752193658094
- type: nauc_map_at_3_diff1
value: 37.00050883840103
- type: nauc_map_at_3_max
value: 26.166201606832622
- type: nauc_map_at_3_std
value: -5.058745283770789
- type: nauc_map_at_5_diff1
value: 37.312001024201614
- type: nauc_map_at_5_max
value: 27.20835796415595
- type: nauc_map_at_5_std
value: -4.534370816807776
- type: nauc_mrr_at_1000_diff1
value: 37.0970736659852
- type: nauc_mrr_at_1000_max
value: 27.50593927169649
- type: nauc_mrr_at_1000_std
value: -1.4306799570196265
- type: nauc_mrr_at_100_diff1
value: 37.097509694127424
- type: nauc_mrr_at_100_max
value: 27.51661298886077
- type: nauc_mrr_at_100_std
value: -1.4199131237737803
- type: nauc_mrr_at_10_diff1
value: 36.932844699119116
- type: nauc_mrr_at_10_max
value: 27.621686914876264
- type: nauc_mrr_at_10_std
value: -1.5134823279039098
- type: nauc_mrr_at_1_diff1
value: 40.02588975690894
- type: nauc_mrr_at_1_max
value: 23.299213673927742
- type: nauc_mrr_at_1_std
value: -3.2449821682928857
- type: nauc_mrr_at_20_diff1
value: 37.03753600016832
- type: nauc_mrr_at_20_max
value: 27.595623068393866
- type: nauc_mrr_at_20_std
value: -1.420887979592882
- type: nauc_mrr_at_3_diff1
value: 36.91182898204814
- type: nauc_mrr_at_3_max
value: 27.152051504473885
- type: nauc_mrr_at_3_std
value: -1.9927562689418785
- type: nauc_mrr_at_5_diff1
value: 36.99585850355028
- type: nauc_mrr_at_5_max
value: 27.595839086884865
- type: nauc_mrr_at_5_std
value: -1.647378331798377
- type: nauc_ndcg_at_1000_diff1
value: 36.81876435190347
- type: nauc_ndcg_at_1000_max
value: 28.829624794175935
- type: nauc_ndcg_at_1000_std
value: -1.65861992216032
- type: nauc_ndcg_at_100_diff1
value: 36.78530077714473
- type: nauc_ndcg_at_100_max
value: 29.345829163429332
- type: nauc_ndcg_at_100_std
value: -0.9834660238902133
- type: nauc_ndcg_at_10_diff1
value: 36.12614493982964
- type: nauc_ndcg_at_10_max
value: 29.68306077249619
- type: nauc_ndcg_at_10_std
value: -2.2988088369038424
- type: nauc_ndcg_at_1_diff1
value: 40.02588975690894
- type: nauc_ndcg_at_1_max
value: 23.299213673927742
- type: nauc_ndcg_at_1_std
value: -3.2449821682928857
- type: nauc_ndcg_at_20_diff1
value: 36.305901085440176
- type: nauc_ndcg_at_20_max
value: 29.900293267731914
- type: nauc_ndcg_at_20_std
value: -1.299150832053996
- type: nauc_ndcg_at_3_diff1
value: 36.08231518905999
- type: nauc_ndcg_at_3_max
value: 27.551888883244995
- type: nauc_ndcg_at_3_std
value: -4.148899293368668
- type: nauc_ndcg_at_5_diff1
value: 36.46875305559966
- type: nauc_ndcg_at_5_max
value: 29.164887327209787
- type: nauc_ndcg_at_5_std
value: -3.3697390325217076
- type: nauc_precision_at_1000_diff1
value: -10.3219194845074
- type: nauc_precision_at_1000_max
value: 3.539745607347162
- type: nauc_precision_at_1000_std
value: 14.732306584403634
- type: nauc_precision_at_100_diff1
value: -6.560356132891633
- type: nauc_precision_at_100_max
value: 10.337169381451696
- type: nauc_precision_at_100_std
value: 19.20600399831645
- type: nauc_precision_at_10_diff1
value: 8.363445709346582
- type: nauc_precision_at_10_max
value: 23.63627616639036
- type: nauc_precision_at_10_std
value: 10.673622244929492
- type: nauc_precision_at_1_diff1
value: 40.02588975690894
- type: nauc_precision_at_1_max
value: 23.299213673927742
- type: nauc_precision_at_1_std
value: -3.2449821682928857
- type: nauc_precision_at_20_diff1
value: 1.4455832869975551
- type: nauc_precision_at_20_max
value: 19.98564944586283
- type: nauc_precision_at_20_std
value: 16.313152259234684
- type: nauc_precision_at_3_diff1
value: 22.401426703012387
- type: nauc_precision_at_3_max
value: 27.664284153790934
- type: nauc_precision_at_3_std
value: 2.0415835028145013
- type: nauc_precision_at_5_diff1
value: 16.858040191181527
- type: nauc_precision_at_5_max
value: 26.95159466584669
- type: nauc_precision_at_5_std
value: 5.337376948898463
- type: nauc_recall_at_1000_diff1
value: 33.419325094531246
- type: nauc_recall_at_1000_max
value: 81.65994088738964
- type: nauc_recall_at_1000_std
value: 63.36886394313217
- type: nauc_recall_at_100_diff1
value: 33.73442949813673
- type: nauc_recall_at_100_max
value: 64.50622866427926
- type: nauc_recall_at_100_std
value: 46.52235851200254
- type: nauc_recall_at_10_diff1
value: 29.788714544862056
- type: nauc_recall_at_10_max
value: 38.99828655870941
- type: nauc_recall_at_10_std
value: 1.7091690344792725
- type: nauc_recall_at_1_diff1
value: 40.79178035869281
- type: nauc_recall_at_1_max
value: 21.840846704021168
- type: nauc_recall_at_1_std
value: -6.154432706859515
- type: nauc_recall_at_20_diff1
value: 29.268077606585464
- type: nauc_recall_at_20_max
value: 46.544672010268386
- type: nauc_recall_at_20_std
value: 11.559943847242257
- type: nauc_recall_at_3_diff1
value: 32.274860688833726
- type: nauc_recall_at_3_max
value: 29.74799709828914
- type: nauc_recall_at_3_std
value: -4.408458412201667
- type: nauc_recall_at_5_diff1
value: 32.393551871375514
- type: nauc_recall_at_5_max
value: 34.33472583999946
- type: nauc_recall_at_5_std
value: -2.6839106423963486
- type: ndcg_at_1
value: 41.599
- type: ndcg_at_10
value: 61.029
- type: ndcg_at_100
value: 64.55
- type: ndcg_at_1000
value: 64.948
- type: ndcg_at_20
value: 62.971
- type: ndcg_at_3
value: 53.122
- type: ndcg_at_5
value: 57.607
- type: precision_at_1
value: 41.599
- type: precision_at_10
value: 9.754
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.346
- type: precision_at_3
value: 23.88
- type: precision_at_5
value: 16.964
- type: recall_at_1
value: 37.177
- type: recall_at_10
value: 81.658
- type: recall_at_100
value: 96.497
- type: recall_at_1000
value: 99.445
- type: recall_at_20
value: 88.758
- type: recall_at_3
value: 61.525
- type: recall_at_5
value: 71.76
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval (default)
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: dev
type: mteb/quora
metrics:
- type: main_score
value: 89.036
- type: map_at_1
value: 71.101
- type: map_at_10
value: 85.455
- type: map_at_100
value: 85.994
- type: map_at_1000
value: 86.008
- type: map_at_20
value: 85.828
- type: map_at_3
value: 82.534
- type: map_at_5
value: 84.436
- type: mrr_at_1
value: 81.86
- type: mrr_at_10
value: 88.11046031746035
- type: mrr_at_100
value: 88.19975129757977
- type: mrr_at_1000
value: 88.20025683960115
- type: mrr_at_20
value: 88.17505422553023
- type: mrr_at_3
value: 87.21666666666681
- type: mrr_at_5
value: 87.86166666666674
- type: nauc_map_at_1000_diff1
value: 76.87108519650897
- type: nauc_map_at_1000_max
value: 33.61242692238016
- type: nauc_map_at_1000_std
value: -41.17597310279849
- type: nauc_map_at_100_diff1
value: 76.87153736524259
- type: nauc_map_at_100_max
value: 33.54970297094648
- type: nauc_map_at_100_std
value: -41.25992178085852
- type: nauc_map_at_10_diff1
value: 77.09438545715085
- type: nauc_map_at_10_max
value: 33.2308328259168
- type: nauc_map_at_10_std
value: -42.899051862463516
- type: nauc_map_at_1_diff1
value: 80.4545167505852
- type: nauc_map_at_1_max
value: 23.403575293489297
- type: nauc_map_at_1_std
value: -38.73915078390272
- type: nauc_map_at_20_diff1
value: 76.94979482879727
- type: nauc_map_at_20_max
value: 33.3965542820201
- type: nauc_map_at_20_std
value: -41.86565874579091
- type: nauc_map_at_3_diff1
value: 77.49566624548056
- type: nauc_map_at_3_max
value: 31.780987466527982
- type: nauc_map_at_3_std
value: -44.21854519305753
- type: nauc_map_at_5_diff1
value: 77.42771789228605
- type: nauc_map_at_5_max
value: 32.68020733774396
- type: nauc_map_at_5_std
value: -44.02529373114044
- type: nauc_mrr_at_1000_diff1
value: 77.2505984468272
- type: nauc_mrr_at_1000_max
value: 35.55233116927507
- type: nauc_mrr_at_1000_std
value: -36.53616122640594
- type: nauc_mrr_at_100_diff1
value: 77.2505647746378
- type: nauc_mrr_at_100_max
value: 35.55185874722589
- type: nauc_mrr_at_100_std
value: -36.536878149072706
- type: nauc_mrr_at_10_diff1
value: 77.28454775401565
- type: nauc_mrr_at_10_max
value: 35.66029990876809
- type: nauc_mrr_at_10_std
value: -36.59040430274804
- type: nauc_mrr_at_1_diff1
value: 77.78026873953571
- type: nauc_mrr_at_1_max
value: 34.24444208714401
- type: nauc_mrr_at_1_std
value: -35.78176040034259
- type: nauc_mrr_at_20_diff1
value: 77.26647675316424
- type: nauc_mrr_at_20_max
value: 35.55846836956988
- type: nauc_mrr_at_20_std
value: -36.573881740702944
- type: nauc_mrr_at_3_diff1
value: 76.97249605916133
- type: nauc_mrr_at_3_max
value: 35.75239213026302
- type: nauc_mrr_at_3_std
value: -36.66948654144912
- type: nauc_mrr_at_5_diff1
value: 77.23448498990302
- type: nauc_mrr_at_5_max
value: 35.66032506714416
- type: nauc_mrr_at_5_std
value: -36.38867782403099
- type: nauc_ndcg_at_1000_diff1
value: 76.78192029636689
- type: nauc_ndcg_at_1000_max
value: 34.838983961231115
- type: nauc_ndcg_at_1000_std
value: -38.7139917221289
- type: nauc_ndcg_at_100_diff1
value: 76.74994017852701
- type: nauc_ndcg_at_100_max
value: 34.5562459567844
- type: nauc_ndcg_at_100_std
value: -39.1159390113717
- type: nauc_ndcg_at_10_diff1
value: 77.03700409583301
- type: nauc_ndcg_at_10_max
value: 34.49775612114203
- type: nauc_ndcg_at_10_std
value: -42.03003149796472
- type: nauc_ndcg_at_1_diff1
value: 77.81816314669393
- type: nauc_ndcg_at_1_max
value: 34.07485459082228
- type: nauc_ndcg_at_1_std
value: -35.94895056306454
- type: nauc_ndcg_at_20_diff1
value: 76.96510332497088
- type: nauc_ndcg_at_20_max
value: 34.450082024564146
- type: nauc_ndcg_at_20_std
value: -40.63314555768711
- type: nauc_ndcg_at_3_diff1
value: 76.151643391554
- type: nauc_ndcg_at_3_max
value: 34.66383376117758
- type: nauc_ndcg_at_3_std
value: -41.39392660300224
- type: nauc_ndcg_at_5_diff1
value: 76.92278503649814
- type: nauc_ndcg_at_5_max
value: 34.35931928202013
- type: nauc_ndcg_at_5_std
value: -42.28302402211198
- type: nauc_precision_at_1000_diff1
value: -44.32392932408826
- type: nauc_precision_at_1000_max
value: -1.5976203820441983
- type: nauc_precision_at_1000_std
value: 38.70649763774179
- type: nauc_precision_at_100_diff1
value: -44.12260005400485
- type: nauc_precision_at_100_max
value: -3.0647204564936312
- type: nauc_precision_at_100_std
value: 36.21137758417562
- type: nauc_precision_at_10_diff1
value: -38.874503464270056
- type: nauc_precision_at_10_max
value: -0.7995397378969676
- type: nauc_precision_at_10_std
value: 25.08941543528278
- type: nauc_precision_at_1_diff1
value: 77.81816314669393
- type: nauc_precision_at_1_max
value: 34.07485459082228
- type: nauc_precision_at_1_std
value: -35.94895056306454
- type: nauc_precision_at_20_diff1
value: -41.93097475974228
- type: nauc_precision_at_20_max
value: -2.691181750976814
- type: nauc_precision_at_20_std
value: 30.655007568557085
- type: nauc_precision_at_3_diff1
value: -21.109490315436517
- type: nauc_precision_at_3_max
value: 9.49736775358964
- type: nauc_precision_at_3_std
value: 9.195033684093397
- type: nauc_precision_at_5_diff1
value: -32.49764534227595
- type: nauc_precision_at_5_max
value: 3.0490365273648803
- type: nauc_precision_at_5_std
value: 18.119935851058468
- type: nauc_recall_at_1000_diff1
value: 75.62341631050762
- type: nauc_recall_at_1000_max
value: 83.86481603169511
- type: nauc_recall_at_1000_std
value: 58.55405944964621
- type: nauc_recall_at_100_diff1
value: 65.95496827539912
- type: nauc_recall_at_100_max
value: 14.97452268550046
- type: nauc_recall_at_100_std
value: -62.18680465170524
- type: nauc_recall_at_10_diff1
value: 75.08434366486102
- type: nauc_recall_at_10_max
value: 32.852276917018116
- type: nauc_recall_at_10_std
value: -62.12970511272648
- type: nauc_recall_at_1_diff1
value: 80.4545167505852
- type: nauc_recall_at_1_max
value: 23.403575293489297
- type: nauc_recall_at_1_std
value: -38.73915078390272
- type: nauc_recall_at_20_diff1
value: 75.66480840772607
- type: nauc_recall_at_20_max
value: 31.230359729601208
- type: nauc_recall_at_20_std
value: -64.11261226121559
- type: nauc_recall_at_3_diff1
value: 73.81582560951404
- type: nauc_recall_at_3_max
value: 31.052473048456708
- type: nauc_recall_at_3_std
value: -49.45567344158681
- type: nauc_recall_at_5_diff1
value: 74.06384098137175
- type: nauc_recall_at_5_max
value: 31.48187742884454
- type: nauc_recall_at_5_std
value: -53.45142194227105
- type: ndcg_at_1
value: 81.84
- type: ndcg_at_10
value: 89.036
- type: ndcg_at_100
value: 90.088
- type: ndcg_at_1000
value: 90.171
- type: ndcg_at_20
value: 89.632
- type: ndcg_at_3
value: 86.39
- type: ndcg_at_5
value: 87.943
- type: precision_at_1
value: 81.84
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.49
- type: precision_at_1000
value: 0.152
- type: precision_at_20
value: 7.076
- type: precision_at_3
value: 38.027
- type: precision_at_5
value: 24.952
- type: recall_at_1
value: 71.101
- type: recall_at_10
value: 96.071
- type: recall_at_100
value: 99.641
- type: recall_at_1000
value: 99.987
- type: recall_at_20
value: 97.961
- type: recall_at_3
value: 88.436
- type: recall_at_5
value: 92.898
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval (default)
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: main_score
value: 89.208
- type: map_at_1
value: 71.635
- type: map_at_10
value: 85.625
- type: map_at_100
value: 86.236
- type: map_at_1000
value: 86.251
- type: map_at_20
value: 86.036
- type: map_at_3
value: 82.664
- type: map_at_5
value: 84.588
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.43901190476178
- type: mrr_at_100
value: 88.52632666726963
- type: mrr_at_1000
value: 88.52691231190065
- type: mrr_at_20
value: 88.5086530013243
- type: mrr_at_3
value: 87.52666666666644
- type: mrr_at_5
value: 88.16716666666639
- type: nauc_map_at_1000_diff1
value: 76.69308460928899
- type: nauc_map_at_1000_max
value: 35.4676191405908
- type: nauc_map_at_1000_std
value: -42.45246342350121
- type: nauc_map_at_100_diff1
value: 76.69724007993696
- type: nauc_map_at_100_max
value: 35.44406733319827
- type: nauc_map_at_100_std
value: -42.503413138162486
- type: nauc_map_at_10_diff1
value: 76.91685742813964
- type: nauc_map_at_10_max
value: 35.02153657433807
- type: nauc_map_at_10_std
value: -44.367365466570426
- type: nauc_map_at_1_diff1
value: 80.55801255675962
- type: nauc_map_at_1_max
value: 27.058161138340527
- type: nauc_map_at_1_std
value: -39.4963211510531
- type: nauc_map_at_20_diff1
value: 76.76447537369087
- type: nauc_map_at_20_max
value: 35.32040158644433
- type: nauc_map_at_20_std
value: -43.21303554960284
- type: nauc_map_at_3_diff1
value: 77.40499840514137
- type: nauc_map_at_3_max
value: 33.10906358569285
- type: nauc_map_at_3_std
value: -46.04737347284554
- type: nauc_map_at_5_diff1
value: 77.15728738532938
- type: nauc_map_at_5_max
value: 34.33464314840439
- type: nauc_map_at_5_std
value: -45.89958892369562
- type: nauc_mrr_at_1000_diff1
value: 77.31291439145946
- type: nauc_mrr_at_1000_max
value: 37.230887514872805
- type: nauc_mrr_at_1000_std
value: -39.38330115067387
- type: nauc_mrr_at_100_diff1
value: 77.31258475265957
- type: nauc_mrr_at_100_max
value: 37.2318332422385
- type: nauc_mrr_at_100_std
value: -39.38278945609743
- type: nauc_mrr_at_10_diff1
value: 77.27217320343534
- type: nauc_mrr_at_10_max
value: 37.26080710249818
- type: nauc_mrr_at_10_std
value: -39.5294415983385
- type: nauc_mrr_at_1_diff1
value: 78.23833876100495
- type: nauc_mrr_at_1_max
value: 36.656764402278775
- type: nauc_mrr_at_1_std
value: -37.255149721562184
- type: nauc_mrr_at_20_diff1
value: 77.30440129198894
- type: nauc_mrr_at_20_max
value: 37.24212487079394
- type: nauc_mrr_at_20_std
value: -39.40823051440391
- type: nauc_mrr_at_3_diff1
value: 77.0650697336263
- type: nauc_mrr_at_3_max
value: 37.338365680984595
- type: nauc_mrr_at_3_std
value: -39.61465396146359
- type: nauc_mrr_at_5_diff1
value: 77.23689991901227
- type: nauc_mrr_at_5_max
value: 37.402095366186515
- type: nauc_mrr_at_5_std
value: -39.81000570358434
- type: nauc_ndcg_at_1000_diff1
value: 76.52492111059385
- type: nauc_ndcg_at_1000_max
value: 36.4917030050163
- type: nauc_ndcg_at_1000_std
value: -40.57405843022022
- type: nauc_ndcg_at_100_diff1
value: 76.52885222990776
- type: nauc_ndcg_at_100_max
value: 36.459002270403104
- type: nauc_ndcg_at_100_std
value: -40.700799028706136
- type: nauc_ndcg_at_10_diff1
value: 76.47989448348181
- type: nauc_ndcg_at_10_max
value: 36.07571701542727
- type: nauc_ndcg_at_10_std
value: -43.68216832570433
- type: nauc_ndcg_at_1_diff1
value: 78.21904562929713
- type: nauc_ndcg_at_1_max
value: 36.68800580256306
- type: nauc_ndcg_at_1_std
value: -37.1106119214964
- type: nauc_ndcg_at_20_diff1
value: 76.51018855356082
- type: nauc_ndcg_at_20_max
value: 36.25847353699082
- type: nauc_ndcg_at_20_std
value: -42.26728405297162
- type: nauc_ndcg_at_3_diff1
value: 75.98751306811951
- type: nauc_ndcg_at_3_max
value: 35.53532168839834
- type: nauc_ndcg_at_3_std
value: -43.22027231551964
- type: nauc_ndcg_at_5_diff1
value: 76.41353684969529
- type: nauc_ndcg_at_5_max
value: 35.84158818150277
- type: nauc_ndcg_at_5_std
value: -44.678250163660735
- type: nauc_precision_at_1000_diff1
value: -44.547524496944504
- type: nauc_precision_at_1000_max
value: -7.017755716303293
- type: nauc_precision_at_1000_std
value: 37.81857144040679
- type: nauc_precision_at_100_diff1
value: -44.2990697671559
- type: nauc_precision_at_100_max
value: -7.090370898560614
- type: nauc_precision_at_100_std
value: 36.74158403150684
- type: nauc_precision_at_10_diff1
value: -39.80812285102048
- type: nauc_precision_at_10_max
value: -3.2239932083528116
- type: nauc_precision_at_10_std
value: 26.540899746112927
- type: nauc_precision_at_1_diff1
value: 78.21904562929713
- type: nauc_precision_at_1_max
value: 36.68800580256306
- type: nauc_precision_at_1_std
value: -37.1106119214964
- type: nauc_precision_at_20_diff1
value: -42.72592324685673
- type: nauc_precision_at_20_max
value: -5.3434665602492455
- type: nauc_precision_at_20_std
value: 32.0763404810473
- type: nauc_precision_at_3_diff1
value: -20.448213979815964
- type: nauc_precision_at_3_max
value: 6.48540224514135
- type: nauc_precision_at_3_std
value: 7.144269812256157
- type: nauc_precision_at_5_diff1
value: -32.73748400918877
- type: nauc_precision_at_5_max
value: 0.5351204546857261
- type: nauc_precision_at_5_std
value: 17.21939760056977
- type: nauc_recall_at_1000_diff1
value: 54.36176817603542
- type: nauc_recall_at_1000_max
value: 8.42245797354225
- type: nauc_recall_at_1000_std
value: 20.82920230407764
- type: nauc_recall_at_100_diff1
value: 70.75825465627794
- type: nauc_recall_at_100_max
value: 40.02545502828442
- type: nauc_recall_at_100_std
value: -29.381365717773434
- type: nauc_recall_at_10_diff1
value: 71.99814968277674
- type: nauc_recall_at_10_max
value: 33.07283139289303
- type: nauc_recall_at_10_std
value: -61.868754150647
- type: nauc_recall_at_1_diff1
value: 80.55801255675962
- type: nauc_recall_at_1_max
value: 27.058161138340527
- type: nauc_recall_at_1_std
value: -39.4963211510531
- type: nauc_recall_at_20_diff1
value: 72.20770471431179
- type: nauc_recall_at_20_max
value: 34.27388608815473
- type: nauc_recall_at_20_std
value: -57.02562075619354
- type: nauc_recall_at_3_diff1
value: 73.33228189075119
- type: nauc_recall_at_3_max
value: 31.031018188701548
- type: nauc_recall_at_3_std
value: -51.71143501327714
- type: nauc_recall_at_5_diff1
value: 72.23242137345602
- type: nauc_recall_at_5_max
value: 32.306978089143975
- type: nauc_recall_at_5_std
value: -58.18075857337518
- type: ndcg_at_1
value: 82.43
- type: ndcg_at_10
value: 89.208
- type: ndcg_at_100
value: 90.312
- type: ndcg_at_1000
value: 90.39500000000001
- type: ndcg_at_20
value: 89.822
- type: ndcg_at_3
value: 86.443
- type: ndcg_at_5
value: 88.051
- type: precision_at_1
value: 82.43
- type: precision_at_10
value: 13.513
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.158
- type: precision_at_3
value: 37.753
- type: precision_at_5
value: 24.886
- type: recall_at_1
value: 71.635
- type: recall_at_10
value: 95.967
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.98599999999999
- type: recall_at_20
value: 97.897
- type: recall_at_3
value: 88.036
- type: recall_at_5
value: 92.551
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS (default)
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: main_score
value: 22.585
- type: map_at_1
value: 5.267
- type: map_at_10
value: 13.682
- type: map_at_100
value: 15.821
- type: map_at_1000
value: 16.155
- type: map_at_20
value: 14.776
- type: map_at_3
value: 9.447999999999999
- type: map_at_5
value: 11.537
- type: mrr_at_1
value: 25.900000000000002
- type: mrr_at_10
value: 37.2399206349206
- type: mrr_at_100
value: 38.27279652206334
- type: mrr_at_1000
value: 38.32018340983372
- type: mrr_at_20
value: 37.88470320013656
- type: mrr_at_3
value: 33.70000000000001
- type: mrr_at_5
value: 35.929999999999964
- type: nauc_map_at_1000_diff1
value: 15.010512584883928
- type: nauc_map_at_1000_max
value: 28.131592592280125
- type: nauc_map_at_1000_std
value: 18.23227227598505
- type: nauc_map_at_100_diff1
value: 15.038422438580948
- type: nauc_map_at_100_max
value: 28.118579098188683
- type: nauc_map_at_100_std
value: 18.102627506796637
- type: nauc_map_at_10_diff1
value: 15.2281617921156
- type: nauc_map_at_10_max
value: 26.358609940161813
- type: nauc_map_at_10_std
value: 14.028442329121555
- type: nauc_map_at_1_diff1
value: 19.804944135000376
- type: nauc_map_at_1_max
value: 20.639841719764735
- type: nauc_map_at_1_std
value: 8.423093067457737
- type: nauc_map_at_20_diff1
value: 15.2511720546573
- type: nauc_map_at_20_max
value: 27.7290112272419
- type: nauc_map_at_20_std
value: 16.279489028653636
- type: nauc_map_at_3_diff1
value: 18.969154716718396
- type: nauc_map_at_3_max
value: 25.211069495284065
- type: nauc_map_at_3_std
value: 8.183585306093075
- type: nauc_map_at_5_diff1
value: 16.995226268024048
- type: nauc_map_at_5_max
value: 26.05551249234277
- type: nauc_map_at_5_std
value: 10.672250037070603
- type: nauc_mrr_at_1000_diff1
value: 18.900489928879864
- type: nauc_mrr_at_1000_max
value: 24.818364671912125
- type: nauc_mrr_at_1000_std
value: 13.55809626059453
- type: nauc_mrr_at_100_diff1
value: 18.885312642782274
- type: nauc_mrr_at_100_max
value: 24.815818576928283
- type: nauc_mrr_at_100_std
value: 13.59041082400011
- type: nauc_mrr_at_10_diff1
value: 18.840497849547965
- type: nauc_mrr_at_10_max
value: 24.508418448385445
- type: nauc_mrr_at_10_std
value: 13.24104462801846
- type: nauc_mrr_at_1_diff1
value: 19.939676779904232
- type: nauc_mrr_at_1_max
value: 20.867982502501388
- type: nauc_mrr_at_1_std
value: 8.654485218204698
- type: nauc_mrr_at_20_diff1
value: 18.75686501314611
- type: nauc_mrr_at_20_max
value: 24.764731653376685
- type: nauc_mrr_at_20_std
value: 13.593035396029709
- type: nauc_mrr_at_3_diff1
value: 19.762798012479887
- type: nauc_mrr_at_3_max
value: 24.851437035247397
- type: nauc_mrr_at_3_std
value: 11.616646922331773
- type: nauc_mrr_at_5_diff1
value: 19.48751619117306
- type: nauc_mrr_at_5_max
value: 25.02565432972893
- type: nauc_mrr_at_5_std
value: 13.096726015560694
- type: nauc_ndcg_at_1000_diff1
value: 14.421194341988578
- type: nauc_ndcg_at_1000_max
value: 29.46627137066849
- type: nauc_ndcg_at_1000_std
value: 25.294914478704282
- type: nauc_ndcg_at_100_diff1
value: 14.188910253634393
- type: nauc_ndcg_at_100_max
value: 29.675945969703676
- type: nauc_ndcg_at_100_std
value: 25.152541930218398
- type: nauc_ndcg_at_10_diff1
value: 14.950700299876996
- type: nauc_ndcg_at_10_max
value: 26.552125339735355
- type: nauc_ndcg_at_10_std
value: 16.423237887520827
- type: nauc_ndcg_at_1_diff1
value: 19.939676779904232
- type: nauc_ndcg_at_1_max
value: 20.867982502501388
- type: nauc_ndcg_at_1_std
value: 8.654485218204698
- type: nauc_ndcg_at_20_diff1
value: 14.646062844584721
- type: nauc_ndcg_at_20_max
value: 29.019613358216105
- type: nauc_ndcg_at_20_std
value: 20.258510159436103
- type: nauc_ndcg_at_3_diff1
value: 19.14228516186438
- type: nauc_ndcg_at_3_max
value: 25.884698532628796
- type: nauc_ndcg_at_3_std
value: 10.082340457184428
- type: nauc_ndcg_at_5_diff1
value: 17.648427955677832
- type: nauc_ndcg_at_5_max
value: 26.960002111496234
- type: nauc_ndcg_at_5_std
value: 13.165986859638604
- type: nauc_precision_at_1000_diff1
value: 3.837505819613137
- type: nauc_precision_at_1000_max
value: 22.085273204384773
- type: nauc_precision_at_1000_std
value: 37.749767215473746
- type: nauc_precision_at_100_diff1
value: 6.0618779651125525
- type: nauc_precision_at_100_max
value: 26.55293689015515
- type: nauc_precision_at_100_std
value: 35.92840742685366
- type: nauc_precision_at_10_diff1
value: 9.609219002496197
- type: nauc_precision_at_10_max
value: 24.7210313158673
- type: nauc_precision_at_10_std
value: 19.688687883244082
- type: nauc_precision_at_1_diff1
value: 19.939676779904232
- type: nauc_precision_at_1_max
value: 20.867982502501388
- type: nauc_precision_at_1_std
value: 8.654485218204698
- type: nauc_precision_at_20_diff1
value: 8.491039455217111
- type: nauc_precision_at_20_max
value: 28.41137144178967
- type: nauc_precision_at_20_std
value: 26.3995307896142
- type: nauc_precision_at_3_diff1
value: 18.574797308038786
- type: nauc_precision_at_3_max
value: 27.317203178234887
- type: nauc_precision_at_3_std
value: 10.752025361042627
- type: nauc_precision_at_5_diff1
value: 15.19646090790648
- type: nauc_precision_at_5_max
value: 27.46968680886624
- type: nauc_precision_at_5_std
value: 15.291114444897175
- type: nauc_recall_at_1000_diff1
value: 3.8560988027864984
- type: nauc_recall_at_1000_max
value: 21.962689956944313
- type: nauc_recall_at_1000_std
value: 39.54218946626981
- type: nauc_recall_at_100_diff1
value: 6.027047924475086
- type: nauc_recall_at_100_max
value: 26.199898112709867
- type: nauc_recall_at_100_std
value: 36.2830620090185
- type: nauc_recall_at_10_diff1
value: 9.535572267531073
- type: nauc_recall_at_10_max
value: 24.611837567240595
- type: nauc_recall_at_10_std
value: 19.643464138242795
- type: nauc_recall_at_1_diff1
value: 19.804944135000376
- type: nauc_recall_at_1_max
value: 20.639841719764735
- type: nauc_recall_at_1_std
value: 8.423093067457737
- type: nauc_recall_at_20_diff1
value: 8.380441122318603
- type: nauc_recall_at_20_max
value: 28.304675323191418
- type: nauc_recall_at_20_std
value: 26.478505583494798
- type: nauc_recall_at_3_diff1
value: 18.589842650254056
- type: nauc_recall_at_3_max
value: 27.267022468432433
- type: nauc_recall_at_3_std
value: 10.489972416983772
- type: nauc_recall_at_5_diff1
value: 14.991522037739355
- type: nauc_recall_at_5_max
value: 27.171074789756666
- type: nauc_recall_at_5_std
value: 15.06566087881635
- type: ndcg_at_1
value: 25.900000000000002
- type: ndcg_at_10
value: 22.585
- type: ndcg_at_100
value: 30.666
- type: ndcg_at_1000
value: 36.356
- type: ndcg_at_20
value: 25.469
- type: ndcg_at_3
value: 20.892
- type: ndcg_at_5
value: 18.617
- type: precision_at_1
value: 25.900000000000002
- type: precision_at_10
value: 11.84
- type: precision_at_100
value: 2.3539999999999996
- type: precision_at_1000
value: 0.372
- type: precision_at_20
value: 7.595000000000001
- type: precision_at_3
value: 19.467000000000002
- type: precision_at_5
value: 16.5
- type: recall_at_1
value: 5.267
- type: recall_at_10
value: 24.023
- type: recall_at_100
value: 47.825
- type: recall_at_1000
value: 75.613
- type: recall_at_20
value: 30.814999999999998
- type: recall_at_3
value: 11.831999999999999
- type: recall_at_5
value: 16.742
task:
type: Retrieval
- dataset:
config: default
name: MTEB SciFact (default)
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: main_score
value: 73.095
- type: map_at_1
value: 58.760999999999996
- type: map_at_10
value: 68.645
- type: map_at_100
value: 69.273
- type: map_at_1000
value: 69.28999999999999
- type: map_at_20
value: 69.148
- type: map_at_3
value: 65.93
- type: map_at_5
value: 67.227
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 69.9334656084656
- type: mrr_at_100
value: 70.4425638039262
- type: mrr_at_1000
value: 70.4592383022689
- type: mrr_at_20
value: 70.3430039931975
- type: mrr_at_3
value: 67.94444444444444
- type: mrr_at_5
value: 68.9111111111111
- type: nauc_map_at_1000_diff1
value: 73.89926164336681
- type: nauc_map_at_1000_max
value: 58.520107712601245
- type: nauc_map_at_1000_std
value: 6.203966518670752
- type: nauc_map_at_100_diff1
value: 73.88266895863376
- type: nauc_map_at_100_max
value: 58.52869559413426
- type: nauc_map_at_100_std
value: 6.2094530706982605
- type: nauc_map_at_10_diff1
value: 73.83454676041971
- type: nauc_map_at_10_max
value: 58.728632474849476
- type: nauc_map_at_10_std
value: 6.161321625117715
- type: nauc_map_at_1_diff1
value: 75.8262967666803
- type: nauc_map_at_1_max
value: 50.75430912296499
- type: nauc_map_at_1_std
value: -3.611304329879618
- type: nauc_map_at_20_diff1
value: 73.7570380099859
- type: nauc_map_at_20_max
value: 58.579878823697186
- type: nauc_map_at_20_std
value: 6.331471307882834
- type: nauc_map_at_3_diff1
value: 73.8670063410728
- type: nauc_map_at_3_max
value: 56.097293037109296
- type: nauc_map_at_3_std
value: 3.118147916941721
- type: nauc_map_at_5_diff1
value: 73.85961347670359
- type: nauc_map_at_5_max
value: 56.73699214051663
- type: nauc_map_at_5_std
value: 4.106265483441233
- type: nauc_mrr_at_1000_diff1
value: 74.43827928989487
- type: nauc_mrr_at_1000_max
value: 60.4918184019879
- type: nauc_mrr_at_1000_std
value: 8.2550027653635
- type: nauc_mrr_at_100_diff1
value: 74.42093690901741
- type: nauc_mrr_at_100_max
value: 60.499273965963
- type: nauc_mrr_at_100_std
value: 8.259231345026938
- type: nauc_mrr_at_10_diff1
value: 74.35347564500812
- type: nauc_mrr_at_10_max
value: 60.84757750349501
- type: nauc_mrr_at_10_std
value: 8.661941517184076
- type: nauc_mrr_at_1_diff1
value: 76.705227209796
- type: nauc_mrr_at_1_max
value: 57.32137546277776
- type: nauc_mrr_at_1_std
value: 4.129875191007982
- type: nauc_mrr_at_20_diff1
value: 74.30079205050251
- type: nauc_mrr_at_20_max
value: 60.53532363656904
- type: nauc_mrr_at_20_std
value: 8.32956272621327
- type: nauc_mrr_at_3_diff1
value: 74.87770487889848
- type: nauc_mrr_at_3_max
value: 60.084677423267784
- type: nauc_mrr_at_3_std
value: 7.3354753376762964
- type: nauc_mrr_at_5_diff1
value: 74.40302787656852
- type: nauc_mrr_at_5_max
value: 60.069030786945795
- type: nauc_mrr_at_5_std
value: 7.9515339665590075
- type: nauc_ndcg_at_1000_diff1
value: 73.66774503145189
- type: nauc_ndcg_at_1000_max
value: 60.51016113928767
- type: nauc_ndcg_at_1000_std
value: 8.65619371919538
- type: nauc_ndcg_at_100_diff1
value: 73.31381886910967
- type: nauc_ndcg_at_100_max
value: 60.804013515995535
- type: nauc_ndcg_at_100_std
value: 8.968020348251471
- type: nauc_ndcg_at_10_diff1
value: 72.99733432767304
- type: nauc_ndcg_at_10_max
value: 62.116824264281135
- type: nauc_ndcg_at_10_std
value: 9.809485757709925
- type: nauc_ndcg_at_1_diff1
value: 76.705227209796
- type: nauc_ndcg_at_1_max
value: 57.32137546277776
- type: nauc_ndcg_at_1_std
value: 4.129875191007982
- type: nauc_ndcg_at_20_diff1
value: 72.52123153995032
- type: nauc_ndcg_at_20_max
value: 61.27934142158071
- type: nauc_ndcg_at_20_std
value: 9.86085851593245
- type: nauc_ndcg_at_3_diff1
value: 73.29758270502096
- type: nauc_ndcg_at_3_max
value: 59.004555912521774
- type: nauc_ndcg_at_3_std
value: 6.372325905257958
- type: nauc_ndcg_at_5_diff1
value: 72.98853570048864
- type: nauc_ndcg_at_5_max
value: 58.64946586595039
- type: nauc_ndcg_at_5_std
value: 6.492229141399973
- type: nauc_precision_at_1000_diff1
value: -18.039255567985364
- type: nauc_precision_at_1000_max
value: 20.62036001220385
- type: nauc_precision_at_1000_std
value: 48.84436760568162
- type: nauc_precision_at_100_diff1
value: -7.274183459314691
- type: nauc_precision_at_100_max
value: 27.97079336127723
- type: nauc_precision_at_100_std
value: 45.54563683450541
- type: nauc_precision_at_10_diff1
value: 18.09725433020935
- type: nauc_precision_at_10_max
value: 49.11398598954457
- type: nauc_precision_at_10_std
value: 43.237184128141266
- type: nauc_precision_at_1_diff1
value: 76.705227209796
- type: nauc_precision_at_1_max
value: 57.32137546277776
- type: nauc_precision_at_1_std
value: 4.129875191007982
- type: nauc_precision_at_20_diff1
value: 1.3410525627186838
- type: nauc_precision_at_20_max
value: 37.35867159476222
- type: nauc_precision_at_20_std
value: 48.245728802102036
- type: nauc_precision_at_3_diff1
value: 46.28921347186669
- type: nauc_precision_at_3_max
value: 55.29086984891835
- type: nauc_precision_at_3_std
value: 25.485619635597068
- type: nauc_precision_at_5_diff1
value: 36.10414877829668
- type: nauc_precision_at_5_max
value: 50.74423891086506
- type: nauc_precision_at_5_std
value: 29.633563462559685
- type: nauc_recall_at_1000_diff1
value: 100.0
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 55.4154995331476
- type: nauc_recall_at_100_diff1
value: 63.437597261126946
- type: nauc_recall_at_100_max
value: 76.15157173980718
- type: nauc_recall_at_100_std
value: 27.439309056956162
- type: nauc_recall_at_10_diff1
value: 66.76520922141613
- type: nauc_recall_at_10_max
value: 74.88986784140963
- type: nauc_recall_at_10_std
value: 22.76893323200783
- type: nauc_recall_at_1_diff1
value: 75.8262967666803
- type: nauc_recall_at_1_max
value: 50.75430912296499
- type: nauc_recall_at_1_std
value: -3.611304329879618
- type: nauc_recall_at_20_diff1
value: 57.56881264902657
- type: nauc_recall_at_20_max
value: 74.94173978131198
- type: nauc_recall_at_20_std
value: 30.5661658602836
- type: nauc_recall_at_3_diff1
value: 69.47119910780243
- type: nauc_recall_at_3_max
value: 59.27944653429989
- type: nauc_recall_at_3_std
value: 6.2814183903482546
- type: nauc_recall_at_5_diff1
value: 68.10420927979328
- type: nauc_recall_at_5_max
value: 60.164296893761815
- type: nauc_recall_at_5_std
value: 9.5025037567499
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 73.095
- type: ndcg_at_100
value: 75.57199999999999
- type: ndcg_at_1000
value: 76.03
- type: ndcg_at_20
value: 74.785
- type: ndcg_at_3
value: 68.527
- type: ndcg_at_5
value: 70.333
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.2170000000000005
- type: precision_at_3
value: 26.667
- type: precision_at_5
value: 17.267
- type: recall_at_1
value: 58.760999999999996
- type: recall_at_10
value: 85.422
- type: recall_at_100
value: 96.0
- type: recall_at_1000
value: 99.667
- type: recall_at_20
value: 91.93299999999999
- type: recall_at_3
value: 72.906
- type: recall_at_5
value: 77.694
task:
type: Retrieval
- dataset:
config: default
name: MTEB SciFact (default)
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: train
type: mteb/scifact
metrics:
- type: main_score
value: 76.527
- type: map_at_1
value: 62.159
- type: map_at_10
value: 72.298
- type: map_at_100
value: 72.789
- type: map_at_1000
value: 72.80499999999999
- type: map_at_20
value: 72.658
- type: map_at_3
value: 69.697
- type: map_at_5
value: 71.405
- type: mrr_at_1
value: 65.01854140914709
- type: mrr_at_10
value: 73.3364235681912
- type: mrr_at_100
value: 73.69023773006475
- type: mrr_at_1000
value: 73.70379275258956
- type: mrr_at_20
value: 73.58899784126623
- type: mrr_at_3
value: 71.63164400494436
- type: mrr_at_5
value: 72.6266996291718
- type: nauc_map_at_1000_diff1
value: 72.26196805521474
- type: nauc_map_at_1000_max
value: 54.82473601925078
- type: nauc_map_at_1000_std
value: 7.532896905808398
- type: nauc_map_at_100_diff1
value: 72.26762601665212
- type: nauc_map_at_100_max
value: 54.84436183081319
- type: nauc_map_at_100_std
value: 7.553915623782155
- type: nauc_map_at_10_diff1
value: 72.09152947041464
- type: nauc_map_at_10_max
value: 54.566662723409344
- type: nauc_map_at_10_std
value: 6.8617531224659984
- type: nauc_map_at_1_diff1
value: 76.44362554275227
- type: nauc_map_at_1_max
value: 47.92837030943323
- type: nauc_map_at_1_std
value: 1.2712665978711795
- type: nauc_map_at_20_diff1
value: 72.1932546895839
- type: nauc_map_at_20_max
value: 54.77868328671626
- type: nauc_map_at_20_std
value: 7.5390256852193085
- type: nauc_map_at_3_diff1
value: 72.32463213490826
- type: nauc_map_at_3_max
value: 51.82850176376716
- type: nauc_map_at_3_std
value: 3.313691247008456
- type: nauc_map_at_5_diff1
value: 72.07694535940702
- type: nauc_map_at_5_max
value: 53.746544557259725
- type: nauc_map_at_5_std
value: 5.460765188941276
- type: nauc_mrr_at_1000_diff1
value: 71.91364820971862
- type: nauc_mrr_at_1000_max
value: 55.999150811401144
- type: nauc_mrr_at_1000_std
value: 10.398705225694902
- type: nauc_mrr_at_100_diff1
value: 71.9166900352723
- type: nauc_mrr_at_100_max
value: 56.0158980617252
- type: nauc_mrr_at_100_std
value: 10.416397031952592
- type: nauc_mrr_at_10_diff1
value: 71.6000299472608
- type: nauc_mrr_at_10_max
value: 55.91890883710817
- type: nauc_mrr_at_10_std
value: 10.291906323764916
- type: nauc_mrr_at_1_diff1
value: 76.49718519036318
- type: nauc_mrr_at_1_max
value: 54.12604217431032
- type: nauc_mrr_at_1_std
value: 8.333140302649584
- type: nauc_mrr_at_20_diff1
value: 71.83180901219741
- type: nauc_mrr_at_20_max
value: 55.95516059386792
- type: nauc_mrr_at_20_std
value: 10.410595110736114
- type: nauc_mrr_at_3_diff1
value: 71.41066101878594
- type: nauc_mrr_at_3_max
value: 56.33030426786812
- type: nauc_mrr_at_3_std
value: 9.807092627499873
- type: nauc_mrr_at_5_diff1
value: 71.48457263107547
- type: nauc_mrr_at_5_max
value: 55.79523079804451
- type: nauc_mrr_at_5_std
value: 9.56339540662926
- type: nauc_ndcg_at_1000_diff1
value: 71.00844332582724
- type: nauc_ndcg_at_1000_max
value: 56.0830968411215
- type: nauc_ndcg_at_1000_std
value: 10.12536414515097
- type: nauc_ndcg_at_100_diff1
value: 71.08255901217294
- type: nauc_ndcg_at_100_max
value: 56.58354344196779
- type: nauc_ndcg_at_100_std
value: 10.788436869510683
- type: nauc_ndcg_at_10_diff1
value: 70.0351612983415
- type: nauc_ndcg_at_10_max
value: 55.69237259785501
- type: nauc_ndcg_at_10_std
value: 9.098137226872005
- type: nauc_ndcg_at_1_diff1
value: 76.49718519036318
- type: nauc_ndcg_at_1_max
value: 54.12604217431032
- type: nauc_ndcg_at_1_std
value: 8.333140302649584
- type: nauc_ndcg_at_20_diff1
value: 70.55288229160162
- type: nauc_ndcg_at_20_max
value: 56.02912372617168
- type: nauc_ndcg_at_20_std
value: 10.658004918812695
- type: nauc_ndcg_at_3_diff1
value: 70.05425859113052
- type: nauc_ndcg_at_3_max
value: 53.60471853426119
- type: nauc_ndcg_at_3_std
value: 5.230816816865092
- type: nauc_ndcg_at_5_diff1
value: 69.93016148017965
- type: nauc_ndcg_at_5_max
value: 54.4721191074644
- type: nauc_ndcg_at_5_std
value: 6.577620935495792
- type: nauc_precision_at_1000_diff1
value: -34.15207795410865
- type: nauc_precision_at_1000_max
value: 19.192406477803747
- type: nauc_precision_at_1000_std
value: 44.20120249056698
- type: nauc_precision_at_100_diff1
value: -21.92421802281828
- type: nauc_precision_at_100_max
value: 27.932025006196444
- type: nauc_precision_at_100_std
value: 46.15700787499129
- type: nauc_precision_at_10_diff1
value: 1.4405770914568594
- type: nauc_precision_at_10_max
value: 39.638084561158536
- type: nauc_precision_at_10_std
value: 36.69460260973796
- type: nauc_precision_at_1_diff1
value: 76.49718519036318
- type: nauc_precision_at_1_max
value: 54.12604217431032
- type: nauc_precision_at_1_std
value: 8.333140302649584
- type: nauc_precision_at_20_diff1
value: -9.073464951503986
- type: nauc_precision_at_20_max
value: 33.43558333269937
- type: nauc_precision_at_20_std
value: 43.649313315759635
- type: nauc_precision_at_3_diff1
value: 33.24438747635695
- type: nauc_precision_at_3_max
value: 49.669129551161866
- type: nauc_precision_at_3_std
value: 20.597427388463906
- type: nauc_precision_at_5_diff1
value: 14.390391464956412
- type: nauc_precision_at_5_max
value: 42.21194236044368
- type: nauc_precision_at_5_std
value: 27.341151685288402
- type: nauc_recall_at_1000_diff1
value: -13.439275396098257
- type: nauc_recall_at_1000_max
value: 70.2668332789378
- type: nauc_recall_at_1000_std
value: 81.47725384292593
- type: nauc_recall_at_100_diff1
value: 63.12484158375845
- type: nauc_recall_at_100_max
value: 78.21397899681712
- type: nauc_recall_at_100_std
value: 47.95971895328952
- type: nauc_recall_at_10_diff1
value: 59.258619066241124
- type: nauc_recall_at_10_max
value: 55.72780924365118
- type: nauc_recall_at_10_std
value: 12.070465110706309
- type: nauc_recall_at_1_diff1
value: 76.44362554275227
- type: nauc_recall_at_1_max
value: 47.92837030943323
- type: nauc_recall_at_1_std
value: 1.2712665978711795
- type: nauc_recall_at_20_diff1
value: 60.27194163739572
- type: nauc_recall_at_20_max
value: 57.859640930044556
- type: nauc_recall_at_20_std
value: 24.959871261637183
- type: nauc_recall_at_3_diff1
value: 63.809558015026404
- type: nauc_recall_at_3_max
value: 50.68780898644539
- type: nauc_recall_at_3_std
value: 0.37064353382673126
- type: nauc_recall_at_5_diff1
value: 61.34563891446967
- type: nauc_recall_at_5_max
value: 52.02870480839336
- type: nauc_recall_at_5_std
value: 3.3678431493557657
- type: ndcg_at_1
value: 65.019
- type: ndcg_at_10
value: 76.527
- type: ndcg_at_100
value: 78.476
- type: ndcg_at_1000
value: 78.859
- type: ndcg_at_20
value: 77.608
- type: ndcg_at_3
value: 72.237
- type: ndcg_at_5
value: 74.578
- type: precision_at_1
value: 65.019
- type: precision_at_10
value: 9.963
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.235
- type: precision_at_3
value: 28.224
- type: precision_at_5
value: 18.541
- type: recall_at_1
value: 62.159
- type: recall_at_10
value: 88.177
- type: recall_at_100
value: 96.70400000000001
- type: recall_at_1000
value: 99.629
- type: recall_at_20
value: 92.171
- type: recall_at_3
value: 76.98
- type: recall_at_5
value: 82.39800000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID (default)
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: main_score
value: 85.786
- type: map_at_1
value: 0.241
- type: map_at_10
value: 2.2560000000000002
- type: map_at_100
value: 13.478000000000002
- type: map_at_1000
value: 32.080999999999996
- type: map_at_20
value: 4.034
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.202
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 95.66666666666666
- type: mrr_at_100
value: 95.66666666666666
- type: mrr_at_1000
value: 95.66666666666666
- type: mrr_at_20
value: 95.66666666666666
- type: mrr_at_3
value: 95.66666666666666
- type: mrr_at_5
value: 95.66666666666666
- type: nauc_map_at_1000_diff1
value: -33.856397215348224
- type: nauc_map_at_1000_max
value: 52.442628978801686
- type: nauc_map_at_1000_std
value: 78.121550023329
- type: nauc_map_at_100_diff1
value: -24.62901955392776
- type: nauc_map_at_100_max
value: 23.848254681406715
- type: nauc_map_at_100_std
value: 44.891168295557435
- type: nauc_map_at_10_diff1
value: 8.624081477851847
- type: nauc_map_at_10_max
value: -9.045454596970382
- type: nauc_map_at_10_std
value: -5.7784874943617375
- type: nauc_map_at_1_diff1
value: 17.522197196988433
- type: nauc_map_at_1_max
value: -9.591987859324789
- type: nauc_map_at_1_std
value: -7.711185842864
- type: nauc_map_at_20_diff1
value: -0.3901783306886495
- type: nauc_map_at_20_max
value: -2.061541912435094
- type: nauc_map_at_20_std
value: 5.1798742009931
- type: nauc_map_at_3_diff1
value: 13.263750752688159
- type: nauc_map_at_3_max
value: -9.833822942004682
- type: nauc_map_at_3_std
value: -9.816054237663943
- type: nauc_map_at_5_diff1
value: 11.492446526529632
- type: nauc_map_at_5_max
value: -10.413949409485241
- type: nauc_map_at_5_std
value: -11.239134010710497
- type: nauc_mrr_at_1000_diff1
value: -31.20376355670401
- type: nauc_mrr_at_1000_max
value: 46.59197012138196
- type: nauc_mrr_at_1000_std
value: 80.28442146089233
- type: nauc_mrr_at_100_diff1
value: -31.20376355670401
- type: nauc_mrr_at_100_max
value: 46.59197012138196
- type: nauc_mrr_at_100_std
value: 80.28442146089233
- type: nauc_mrr_at_10_diff1
value: -31.20376355670401
- type: nauc_mrr_at_10_max
value: 46.59197012138196
- type: nauc_mrr_at_10_std
value: 80.28442146089233
- type: nauc_mrr_at_1_diff1
value: -29.108309990663138
- type: nauc_mrr_at_1_max
value: 43.23062558356683
- type: nauc_mrr_at_1_std
value: 78.64145658263308
- type: nauc_mrr_at_20_diff1
value: -31.20376355670401
- type: nauc_mrr_at_20_max
value: 46.59197012138196
- type: nauc_mrr_at_20_std
value: 80.28442146089233
- type: nauc_mrr_at_3_diff1
value: -31.20376355670401
- type: nauc_mrr_at_3_max
value: 46.59197012138196
- type: nauc_mrr_at_3_std
value: 80.28442146089233
- type: nauc_mrr_at_5_diff1
value: -31.20376355670401
- type: nauc_mrr_at_5_max
value: 46.59197012138196
- type: nauc_mrr_at_5_std
value: 80.28442146089233
- type: nauc_ndcg_at_1000_diff1
value: -30.02494733757554
- type: nauc_ndcg_at_1000_max
value: 46.879741543484386
- type: nauc_ndcg_at_1000_std
value: 71.28860776857371
- type: nauc_ndcg_at_100_diff1
value: -40.382758704499686
- type: nauc_ndcg_at_100_max
value: 46.81853301905501
- type: nauc_ndcg_at_100_std
value: 78.08882504276026
- type: nauc_ndcg_at_10_diff1
value: -37.9762225498264
- type: nauc_ndcg_at_10_max
value: 33.818776701290645
- type: nauc_ndcg_at_10_std
value: 60.60876378870803
- type: nauc_ndcg_at_1_diff1
value: -29.64995269631029
- type: nauc_ndcg_at_1_max
value: 11.702932828760678
- type: nauc_ndcg_at_1_std
value: 46.36707663197732
- type: nauc_ndcg_at_20_diff1
value: -34.21566964686303
- type: nauc_ndcg_at_20_max
value: 35.71546714747097
- type: nauc_ndcg_at_20_std
value: 64.96478634285614
- type: nauc_ndcg_at_3_diff1
value: -40.87606957878375
- type: nauc_ndcg_at_3_max
value: 34.266783345764296
- type: nauc_ndcg_at_3_std
value: 59.417588176302125
- type: nauc_ndcg_at_5_diff1
value: -40.86776131403312
- type: nauc_ndcg_at_5_max
value: 32.103157304099696
- type: nauc_ndcg_at_5_std
value: 53.26187123017394
- type: nauc_precision_at_1000_diff1
value: -27.155383361683644
- type: nauc_precision_at_1000_max
value: 47.99609392284812
- type: nauc_precision_at_1000_std
value: 53.130872385717154
- type: nauc_precision_at_100_diff1
value: -44.040520753793835
- type: nauc_precision_at_100_max
value: 49.40807778768706
- type: nauc_precision_at_100_std
value: 76.68780066667708
- type: nauc_precision_at_10_diff1
value: -38.63910231606874
- type: nauc_precision_at_10_max
value: 42.93405560776088
- type: nauc_precision_at_10_std
value: 66.83323199380891
- type: nauc_precision_at_1_diff1
value: -29.108309990663138
- type: nauc_precision_at_1_max
value: 43.23062558356683
- type: nauc_precision_at_1_std
value: 78.64145658263308
- type: nauc_precision_at_20_diff1
value: -35.962158439352734
- type: nauc_precision_at_20_max
value: 36.22370294628403
- type: nauc_precision_at_20_std
value: 65.49049101917842
- type: nauc_precision_at_3_diff1
value: -53.11469565992303
- type: nauc_precision_at_3_max
value: 62.111220033865045
- type: nauc_precision_at_3_std
value: 67.69895731218259
- type: nauc_precision_at_5_diff1
value: -53.04735248757662
- type: nauc_precision_at_5_max
value: 60.29588164734101
- type: nauc_precision_at_5_std
value: 61.332609813217566
- type: nauc_recall_at_1000_diff1
value: -26.68853089093055
- type: nauc_recall_at_1000_max
value: 40.15392752238839
- type: nauc_recall_at_1000_std
value: 58.18451441165892
- type: nauc_recall_at_100_diff1
value: -15.581247880461934
- type: nauc_recall_at_100_max
value: 10.81212430083709
- type: nauc_recall_at_100_std
value: 27.018420696008477
- type: nauc_recall_at_10_diff1
value: 11.246082508546243
- type: nauc_recall_at_10_max
value: -13.581652280948264
- type: nauc_recall_at_10_std
value: -11.980214279022423
- type: nauc_recall_at_1_diff1
value: 17.522197196988433
- type: nauc_recall_at_1_max
value: -9.591987859324789
- type: nauc_recall_at_1_std
value: -7.711185842864
- type: nauc_recall_at_20_diff1
value: 4.890473144429516
- type: nauc_recall_at_20_max
value: -8.848258614984216
- type: nauc_recall_at_20_std
value: -4.194164888978863
- type: nauc_recall_at_3_diff1
value: 13.525152290557976
- type: nauc_recall_at_3_max
value: -13.266833552882778
- type: nauc_recall_at_3_std
value: -14.734712973008559
- type: nauc_recall_at_5_diff1
value: 12.38086304308239
- type: nauc_recall_at_5_max
value: -14.125430291797542
- type: nauc_recall_at_5_std
value: -16.303159417191377
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.786
- type: ndcg_at_100
value: 65.689
- type: ndcg_at_1000
value: 57.51500000000001
- type: ndcg_at_20
value: 81.291
- type: ndcg_at_3
value: 89.531
- type: ndcg_at_5
value: 88.435
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 90.0
- type: precision_at_100
value: 67.64
- type: precision_at_1000
value: 25.422
- type: precision_at_20
value: 84.89999999999999
- type: precision_at_3
value: 92.667
- type: precision_at_5
value: 93.2
- type: recall_at_1
value: 0.241
- type: recall_at_10
value: 2.37
- type: recall_at_100
value: 16.242
- type: recall_at_1000
value: 53.702000000000005
- type: recall_at_20
value: 4.343
- type: recall_at_3
value: 0.744
- type: recall_at_5
value: 1.248
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020 (default)
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: main_score
value: 30.676
- type: map_at_1
value: 3.17
- type: map_at_10
value: 12.838
- type: map_at_100
value: 19.455
- type: map_at_1000
value: 21.096999999999998
- type: map_at_20
value: 15.781
- type: map_at_3
value: 6.938
- type: map_at_5
value: 9.324
- type: mrr_at_1
value: 38.775510204081634
- type: mrr_at_10
value: 54.38208616780046
- type: mrr_at_100
value: 54.88429833086117
- type: mrr_at_1000
value: 54.88429833086117
- type: mrr_at_20
value: 54.69357918606039
- type: mrr_at_3
value: 51.02040816326531
- type: mrr_at_5
value: 52.44897959183673
- type: nauc_map_at_1000_diff1
value: 11.768546469752255
- type: nauc_map_at_1000_max
value: -6.234751836059205
- type: nauc_map_at_1000_std
value: -0.5086610596792738
- type: nauc_map_at_100_diff1
value: 12.210218562618925
- type: nauc_map_at_100_max
value: -7.479895692892787
- type: nauc_map_at_100_std
value: -3.9456755950311653
- type: nauc_map_at_10_diff1
value: 17.872233692928692
- type: nauc_map_at_10_max
value: -1.4391736686946837
- type: nauc_map_at_10_std
value: -19.04083165317906
- type: nauc_map_at_1_diff1
value: 26.952695929538866
- type: nauc_map_at_1_max
value: -23.861150686867994
- type: nauc_map_at_1_std
value: -36.57857926974273
- type: nauc_map_at_20_diff1
value: 15.79525205450058
- type: nauc_map_at_20_max
value: -5.818581673388666
- type: nauc_map_at_20_std
value: -14.222828899523332
- type: nauc_map_at_3_diff1
value: 24.296906628915092
- type: nauc_map_at_3_max
value: -3.075381662286569
- type: nauc_map_at_3_std
value: -25.324259455516085
- type: nauc_map_at_5_diff1
value: 23.81656417505337
- type: nauc_map_at_5_max
value: -3.736702154899666
- type: nauc_map_at_5_std
value: -25.914105892424722
- type: nauc_mrr_at_1000_diff1
value: 17.59241956039767
- type: nauc_mrr_at_1000_max
value: -33.70575077889871
- type: nauc_mrr_at_1000_std
value: -31.563016486948225
- type: nauc_mrr_at_100_diff1
value: 17.59241956039767
- type: nauc_mrr_at_100_max
value: -33.70575077889871
- type: nauc_mrr_at_100_std
value: -31.563016486948225
- type: nauc_mrr_at_10_diff1
value: 16.7444853592715
- type: nauc_mrr_at_10_max
value: -34.67620993606911
- type: nauc_mrr_at_10_std
value: -30.36717732372874
- type: nauc_mrr_at_1_diff1
value: 24.89375000365368
- type: nauc_mrr_at_1_max
value: -30.815417372385873
- type: nauc_mrr_at_1_std
value: -44.687809069434245
- type: nauc_mrr_at_20_diff1
value: 17.80682781563912
- type: nauc_mrr_at_20_max
value: -33.65132043726252
- type: nauc_mrr_at_20_std
value: -30.788168935299247
- type: nauc_mrr_at_3_diff1
value: 16.98952594458621
- type: nauc_mrr_at_3_max
value: -31.87405417907046
- type: nauc_mrr_at_3_std
value: -32.99668568417734
- type: nauc_mrr_at_5_diff1
value: 17.692734228351465
- type: nauc_mrr_at_5_max
value: -31.478014354340267
- type: nauc_mrr_at_5_std
value: -34.27625710571425
- type: nauc_ndcg_at_1000_diff1
value: 7.2521145392859925
- type: nauc_ndcg_at_1000_max
value: -11.879052032552305
- type: nauc_ndcg_at_1000_std
value: 16.868276570948492
- type: nauc_ndcg_at_100_diff1
value: 9.68273273743821
- type: nauc_ndcg_at_100_max
value: -19.509766471983163
- type: nauc_ndcg_at_100_std
value: 10.902137038006767
- type: nauc_ndcg_at_10_diff1
value: 15.249688997310848
- type: nauc_ndcg_at_10_max
value: -10.630040416461807
- type: nauc_ndcg_at_10_std
value: -12.375334439103657
- type: nauc_ndcg_at_1_diff1
value: 23.20606123961159
- type: nauc_ndcg_at_1_max
value: -29.329783979356527
- type: nauc_ndcg_at_1_std
value: -44.10128294915467
- type: nauc_ndcg_at_20_diff1
value: 13.146989938292835
- type: nauc_ndcg_at_20_max
value: -17.320226384710132
- type: nauc_ndcg_at_20_std
value: -9.593117671485109
- type: nauc_ndcg_at_3_diff1
value: 18.262720339591553
- type: nauc_ndcg_at_3_max
value: -10.618248628559396
- type: nauc_ndcg_at_3_std
value: -24.069451775959436
- type: nauc_ndcg_at_5_diff1
value: 23.015053471568216
- type: nauc_ndcg_at_5_max
value: -7.6818187454174485
- type: nauc_ndcg_at_5_std
value: -23.610640745384508
- type: nauc_precision_at_1000_diff1
value: -21.295596373775506
- type: nauc_precision_at_1000_max
value: 33.313558338532154
- type: nauc_precision_at_1000_std
value: 36.00306839548485
- type: nauc_precision_at_100_diff1
value: -8.17984508673104
- type: nauc_precision_at_100_max
value: -3.5218633922770186
- type: nauc_precision_at_100_std
value: 64.06409459764816
- type: nauc_precision_at_10_diff1
value: 9.669119653314857
- type: nauc_precision_at_10_max
value: -7.486292775323736
- type: nauc_precision_at_10_std
value: 6.05291075028193
- type: nauc_precision_at_1_diff1
value: 24.89375000365368
- type: nauc_precision_at_1_max
value: -30.815417372385873
- type: nauc_precision_at_1_std
value: -44.687809069434245
- type: nauc_precision_at_20_diff1
value: 5.612232465910688
- type: nauc_precision_at_20_max
value: -9.493221506431967
- type: nauc_precision_at_20_std
value: 21.580627790601074
- type: nauc_precision_at_3_diff1
value: 17.374772867960296
- type: nauc_precision_at_3_max
value: -5.4513905841762496
- type: nauc_precision_at_3_std
value: -18.247738169868203
- type: nauc_precision_at_5_diff1
value: 24.856012104520754
- type: nauc_precision_at_5_max
value: -1.689335249747221
- type: nauc_precision_at_5_std
value: -17.759731374287938
- type: nauc_recall_at_1000_diff1
value: -16.083745923678773
- type: nauc_recall_at_1000_max
value: -6.4871691773402285
- type: nauc_recall_at_1000_std
value: 72.67593737144807
- type: nauc_recall_at_100_diff1
value: -2.2459215656431395
- type: nauc_recall_at_100_max
value: -22.74818872908392
- type: nauc_recall_at_100_std
value: 32.77497339706697
- type: nauc_recall_at_10_diff1
value: 8.670501799477833
- type: nauc_recall_at_10_max
value: -9.585611028753716
- type: nauc_recall_at_10_std
value: -10.351304338231115
- type: nauc_recall_at_1_diff1
value: 26.952695929538866
- type: nauc_recall_at_1_max
value: -23.861150686867994
- type: nauc_recall_at_1_std
value: -36.57857926974273
- type: nauc_recall_at_20_diff1
value: 8.556995668015755
- type: nauc_recall_at_20_max
value: -17.78731664551538
- type: nauc_recall_at_20_std
value: -2.6521355533836433
- type: nauc_recall_at_3_diff1
value: 21.343842933377587
- type: nauc_recall_at_3_max
value: -2.6294436308829456
- type: nauc_recall_at_3_std
value: -21.662684580036945
- type: nauc_recall_at_5_diff1
value: 20.98116651540531
- type: nauc_recall_at_5_max
value: -6.952288993104518
- type: nauc_recall_at_5_std
value: -24.78098743592733
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 30.676
- type: ndcg_at_100
value: 41.345
- type: ndcg_at_1000
value: 52.586
- type: ndcg_at_20
value: 31.176
- type: ndcg_at_3
value: 35.467
- type: ndcg_at_5
value: 32.784
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 27.346999999999998
- type: precision_at_100
value: 8.265
- type: precision_at_1000
value: 1.58
- type: precision_at_20
value: 20.51
- type: precision_at_3
value: 38.775999999999996
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.17
- type: recall_at_10
value: 19.188
- type: recall_at_100
value: 50.775000000000006
- type: recall_at_1000
value: 85.392
- type: recall_at_20
value: 28.061000000000003
- type: recall_at_3
value: 7.949000000000001
- type: recall_at_5
value: 11.863
task:
type: Retrieval
tags:
- mteb
license: mit
---
<h1 align="center">Combination of Embedding Models: <a href="https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5">Arctic M (v1.5)</a> & <a href="https://huggingface.co/BAAI/bge-small-en-v1.5">BGE Small (en; v1.5)</a></h1>
<h4 align="center">
    <p>
        <a href="#acknowledgement">Acknowledgement</a> |
        <a href="#combination-of-embedding-models">Combination of Embedding Models</a> |
        <a href="#usage">Usage</a> |
        <a href="#citation">Citation</a> |
        <a href="#license">License</a>
    </p>
</h4>
## Acknowledgement
First of all, we want to acknowledge the original creators of the [Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) and [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) models which are used to create this model. Our model is just a combination of these two models, and we have not made any changes to the original models.

Furthermore, we want to acknowledge the team of Marqo, who worked on the idea of combining two models through concatenation in parallel to ourselves. Their initial effort allowed us to re-use existing pieces of code, in particular the [modeling script](https://huggingface.co/PaDaS-Lab/arctic-m-bge-small/blob/main/modeling_arctic_m_bge_small.py) for bringing the combined model to HuggingFace.
## Combination of Embedding Models
### Overview
Embedding models have become increasingly powerful and applicable across various use cases. However, the next significant challenge lies in enhancing their efficiency in terms of resource consumption. Our goal is to experiment with combining two embedding models to achieve better performance with fewer resources.
### Key Insights
1. **Diversity Matters**: Initial findings suggest that combining models with differing characteristics can complement each other, resulting in improved outcomes. To design an effective combination, the diversity of the models—evaluated by factors like MTEB performance, architecture, and training data—is crucial.
2. **Combination Technique**:
- We combine the embeddings of two models using the most straightforward approach: concatenation.
   - Prior to concatenation, we normalize the embeddings to ensure they are on the same scale. This step is vital for achieving coherent and meaningful results; a minimal sketch of the procedure is shown below.
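To make this concrete, here is a minimal sketch of the normalize-then-concatenate step. It is our illustration, not code from this repository: the random tensors stand in for the two base models' outputs, and only the embedding dimensions (768 and 384) are taken from this card.

```python
import torch
import torch.nn.functional as F

# Stand-ins for a batch of 4 embeddings from each base model:
# arctic-m yields 768-dim vectors, bge-small yields 384-dim vectors.
emb_arctic = torch.randn(4, 768)
emb_bge = torch.randn(4, 384)

# L2-normalize each model's embeddings so both contribute on the same
# scale; otherwise the model with larger vector norms would dominate
# any similarity computed on the combined vector.
emb_arctic = F.normalize(emb_arctic, p=2, dim=-1)
emb_bge = F.normalize(emb_bge, p=2, dim=-1)

combined = torch.cat([emb_arctic, emb_bge], dim=-1)  # shape: (4, 1152)
```

A convenient consequence of this scheme: because each sub-embedding has unit norm, the cosine similarity between two combined vectors is exactly the average of the two per-model cosine similarities, so neither model can dominate the score.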
### Implementation
We combined the following models:
- **[Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5)**
- **[BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)**
#### Model Details
- **Output Embedding Dimensions**: 1152 (768 + 384)
- **Total Parameters**: 142M (109M + 33M)
### Results
This combination demonstrated notable performance on the **MTEB Leaderboard**, offering a promising foundation for further experimentation:
- **Performance Improvement**: The average nDCG@10 on the MTEB English Retrieval benchmark (the metric is defined after this list) increased from **55.14 to 56.5**, climbing several spots on the leaderboard, a feat that often requires extensive engineering effort.
- **Comparison with Chimera Model**:
Interestingly, the **[Chimera model](https://huggingface.co/Marqo/marqo-chimera-arctic-bge-m)**, which employs more potent models individually, performs worse on the leaderboard. This raises questions about:
- The role of parameter count.
- Differences in training processes.
- How effectively two models complement each other for specific benchmark tasks.
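For reference (a standard definition, not something stated in this card), nDCG@10 measures how highly the relevant documents are ranked within the top ten results:

$$
\mathrm{nDCG@10} = \frac{\mathrm{DCG@10}}{\mathrm{IDCG@10}},
\qquad
\mathrm{DCG@10} = \sum_{i=1}^{10} \frac{2^{rel_i} - 1}{\log_2(i + 1)},
$$

where $rel_i$ is the graded relevance of the document at rank $i$ and IDCG@10 is the DCG@10 of the ideal ranking, so scores lie in $[0, 1]$ (reported above scaled by 100).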
### Future Directions
While the results are promising, we acknowledge the complexity of model combinations and the importance of focusing on more than leaderboard rankings. The fact that simply concatenating embeddings yields tangible gains underscores the potential for further exploration in this area.
We look forward to conducting additional experiments and engaging in discussions to deepen our understanding of effective model combinations.
## Usage
```python
import numpy as np
import torch
from torch.utils.data import DataLoader
from transformers import AutoModel, AutoTokenizer, PreTrainedTokenizerFast, BatchEncoding, DataCollatorWithPadding
from functools import partial
from datasets import Dataset
from tqdm import tqdm
from typing import Dict, List, Mapping

NUM_WORKERS = 4
BATCH_SIZE = 32


def transform_func(tokenizer: PreTrainedTokenizerFast,
                   max_length: int,
                   examples: Dict[str, List]) -> BatchEncoding:
    # Tokenize a batch of raw texts; padding is finalized by the collator below.
    return tokenizer(examples['contents'],
                     max_length=max_length,
                     padding=True,
                     return_token_type_ids=False,
                     truncation=True)


def move_to_cuda(sample):
    if len(sample) == 0:
        return {}

    def _move_to_cuda(maybe_tensor):
        if torch.is_tensor(maybe_tensor):
            return maybe_tensor.cuda(non_blocking=True)
        elif isinstance(maybe_tensor, dict):
            return {key: _move_to_cuda(value) for key, value in maybe_tensor.items()}
        elif isinstance(maybe_tensor, list):
            return [_move_to_cuda(x) for x in maybe_tensor]
        elif isinstance(maybe_tensor, tuple):
            return tuple([_move_to_cuda(x) for x in maybe_tensor])
        elif isinstance(maybe_tensor, Mapping):
            return type(maybe_tensor)({k: _move_to_cuda(v) for k, v in maybe_tensor.items()})
        else:
            return maybe_tensor

    return _move_to_cuda(sample)


class RetrievalModel:
    def __init__(self, pretrained_model_name: str, **kwargs):
        self.pretrained_model_name = pretrained_model_name
        # trust_remote_code is required: the combined model ships custom modeling code.
        self.encoder = AutoModel.from_pretrained(pretrained_model_name, trust_remote_code=True)
        self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, trust_remote_code=True)
        self.gpu_count = torch.cuda.device_count()  # the snippet assumes at least one CUDA device
        self.batch_size = BATCH_SIZE

        self.query_instruction = 'Represent this sentence for searching relevant passages: {}'
        self.document_instruction = '{}'
        self.pool_type = 'cls'
        self.max_length = 512

        self.encoder.cuda()
        self.encoder.eval()

    def encode_queries(self, queries: List[str], **kwargs) -> np.ndarray:
        input_texts = [self.query_instruction.format(q) for q in queries]
        return self._do_encode(input_texts)

    def encode_corpus(self, corpus: List[Dict[str, str]], **kwargs) -> np.ndarray:
        input_texts = [self.document_instruction.format('{} {}'.format(d.get('title', ''), d['text']).strip()) for d in corpus]
        return self._do_encode(input_texts)

    @torch.no_grad()
    def _do_encode(self, input_texts: List[str]) -> np.ndarray:
        dataset: Dataset = Dataset.from_dict({'contents': input_texts})
        dataset.set_transform(partial(transform_func, self.tokenizer, self.max_length))

        data_collator = DataCollatorWithPadding(self.tokenizer, pad_to_multiple_of=8)
        data_loader = DataLoader(
            dataset,
            batch_size=self.batch_size * self.gpu_count,
            shuffle=False,
            drop_last=False,
            num_workers=NUM_WORKERS,
            collate_fn=data_collator,
            pin_memory=True)

        encoded_embeds = []
        for batch_dict in tqdm(data_loader, desc='encoding', mininterval=10):
            batch_dict = move_to_cuda(batch_dict)

            with torch.amp.autocast('cuda'):
                # The custom modeling code returns the combined sentence embeddings directly.
                outputs = self.encoder(**batch_dict)
            encoded_embeds.append(outputs.cpu().numpy())

        return np.concatenate(encoded_embeds, axis=0)


model = RetrievalModel('PaDaS-Lab/arctic-m-bge-small')
embeds_q = model.encode_queries(['What is the capital of France?'])
# [[-0.01099197 -0.08366653  0.0060241  ...  0.03182805 -0.00674182  0.058571  ]]
embeds_d = model.encode_corpus([{'title': 'Paris', 'text': 'Paris is the capital of France.'}])
# [[ 0.0391828  -0.02951912  0.10862264 ... -0.05373885 -0.00368348  0.02323797]]
```
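As a small follow-up (our addition, not part of the original snippet), the embeddings can be scored with a plain matrix product. This assumes the returned embeddings are already L2-normalized, in line with the normalization step described above; if they are not, normalize them first so that the dot product equals cosine similarity.

```python
# Rank documents for each query: a higher dot product means more relevant.
scores = embeds_q @ embeds_d.T
print(scores)  # shape: (num_queries, num_documents)
```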
### Libraries
```
torch==2.5.0
transformers==4.42.3
mteb==1.12.94
```
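For a quick setup, these pins can be installed with `pip install torch==2.5.0 transformers==4.42.3 mteb==1.12.94`; the snippet above additionally uses `datasets` and `tqdm`, which are typically pulled in as dependencies of these packages.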
## Citation
```bibtex
@misc{https://doi.org/10.48550/arxiv.2407.08275,
doi = {10.48550/ARXIV.2407.08275},
url = {https://arxiv.org/abs/2407.08275},
author = {Caspari, Laura and Dastidar, Kanishka Ghosh and Zerhoudi, Saber and Mitrovic, Jelena and Granitzer, Michael},
title = {Beyond Benchmarks: Evaluating Embedding Model Similarity for Retrieval Augmented Generation Systems},
year = {2024},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## License
Note that Arctic M (v1.5) is licensed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license and BGE Small (en; v1.5) under the [MIT](https://opensource.org/licenses/MIT) license. Please refer to the licenses of the original models for more details.
| mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF | mradermacher | 2024-11-15T23:17:11Z | 7 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Dampfinchen/Llama-3.1-8B-Ultra-Instruct", "base_model:quantized:Dampfinchen/Llama-3.1-8B-Ultra-Instruct", "license:llama3", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-15T22:58:56Z |
---
base_model: Dampfinchen/Llama-3.1-8B-Ultra-Instruct
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Dampfinchen/Llama-3.1-8B-Ultra-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
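As a minimal, hedged illustration (not part of the original card), a downloaded quant can be run with the `llama-cpp-python` bindings. The file name below is the Q4_K_M quant from the table in the next section and is assumed to sit in the working directory:

```python
from llama_cpp import Llama

# Load a local GGUF file (download one of the quants listed below first).
llm = Llama(model_path="Llama-3.1-8B-Ultra-Instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```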
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3.1-8B-Ultra-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf | RichardErkhov | 2024-11-15T23:01:23Z | 9 | 0 | null | ["gguf", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-15T21:31:14Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3.5-mini-instruct-text2sql - GGUF
- Model creator: https://huggingface.co/cahaj/
- Original model: https://huggingface.co/cahaj/Phi-3.5-mini-instruct-text2sql/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3.5-mini-instruct-text2sql.Q2_K.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi-3.5-mini-instruct-text2sql.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3.5-mini-instruct-text2sql.Q3_K.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi-3.5-mini-instruct-text2sql.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi-3.5-mini-instruct-text2sql.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi-3.5-mini-instruct-text2sql.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3.5-mini-instruct-text2sql.Q4_0.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3.5-mini-instruct-text2sql.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3.5-mini-instruct-text2sql.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3.5-mini-instruct-text2sql.Q4_K.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi-3.5-mini-instruct-text2sql.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi-3.5-mini-instruct-text2sql.Q4_1.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3.5-mini-instruct-text2sql.Q5_0.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3.5-mini-instruct-text2sql.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3.5-mini-instruct-text2sql.Q5_K.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi-3.5-mini-instruct-text2sql.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi-3.5-mini-instruct-text2sql.Q5_1.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3.5-mini-instruct-text2sql.Q6_K.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi-3.5-mini-instruct-text2sql.Q8_0.gguf](https://huggingface.co/RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf/blob/main/Phi-3.5-mini-instruct-text2sql.Q8_0.gguf) | Q8_0 | 3.78GB |
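For local testing, here is a minimal sketch (not part of the original card) that downloads one of the quants above and runs it with `llama-cpp-python`; the chosen file and the prompt are illustrative assumptions, not recommendations.
```py
# Minimal sketch (not from the original card): fetch one of the quants above
# and run it with llama-cpp-python. The Q4_K_M file and the prompt are
# illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/cahaj_-_Phi-3.5-mini-instruct-text2sql-gguf",
    filename="Phi-3.5-mini-instruct-text2sql.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Convert to SQL: list all orders placed in 2024.", max_tokens=128)
print(result["choices"][0]["text"])
```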
Original model description:
---
base_model: microsoft/Phi-3.5-mini-instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** cahaj
- **License:** apache-2.0
- **Finetuned from model:** microsoft/Phi-3.5-mini-instruct
This Llama-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlekseyCalvin/Vladimir_Sillov_SilverAgePoets_FLUX_LoRA
|
AlekseyCalvin
| 2024-11-15T22:56:37Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-14T06:37:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: poet Vladimir Sillov
widget:
- text: >-
A photo of Soviet poet Vladimir Sillov in bed at dawn in USSR circa 1923. Sillov, in his mid-20s, young, best quality. Medium frame. Moderately worn, textured skin with blemishes and pores, extremely detailed color photograph.
output:
url: SillovBed2.png
- text: >-
A photo of Soviet poet Vladimir Sillov in bed in USSR circa 1923. Sillov, in his mid-20s, young, best quality. Medium frame. Moderately worn, textured skin with blemishes and pores, extremely detailed color photograph.
output:
url: SillovBed3.png
- text: >-
A photo of Soviet poet Vladimir Sillov, color photograph...
output:
url: SillovSickle.webp
- text: >-
A photo of Soviet poet Vladimir Sillov walking in a stairwell in USSR circa 1923 and saying via text balloon: "Days Would Walk an Untrod Morbid Staircase...", then under it another text balloon: "...At accelerant pace!" Sillov, in his mid-20s, young, is in a hurry, best quality. Medium frame. Moderately worn, textured skin with blemishes and pores, extremely detailed color photograph.
output:
url: Sillov_1_DaysWouldWalk.png
---
# Poet Vladimir Sillov Flux
A **Low-Rank Adapter (LoRA)** for **FLUX** Text2Image models, trained to reincarnate the likeness of the poet, literary scholar, critic, editor, screenwriter, anthologist, progressive sociocultural activist, life-partner of Petrovskaya, student & biographer of Khlebnikov, friend of Pasternak, Mayakovskiy, Burlyuk, Aseev, Tretiakov, & many others, as well as a tragic and unforgotten avatar of all that could've been and what sometimes actually was: <br>
**Vladimir Sillov** *(b.1901-d.02/16/1930)*. <br>
<Gallery />
Unfortunately, Sillov does not yet have a Wikipedia page (at least not in English/Worldish)...
We hope this sad fact is corrected one day. <br>
For now, here's a clip of a reincarnated/approximated iteration of the poet, performing "live" (per our translation/interpretation/adaptation): <br>
[CLICK HERE TO WATCH THE CLIP ON YOUTUBE](https://youtu.be/paffYoQpAq4?si=EMQW2zM3IhdqfWVr)
Plus one of our translations from Sillov. More will be posted soon at [www.SilverAgePoets.com](https://www.silveragepoets.com): <br>
**UNTIL IT DAWNS ANEW**
Days<br>
Would walk <br>
An untrod morbid staircase<br>
At an accelerant pace.<br>
Soon <br>
The trees <br>
Splinter off unto leaflessness,<br>
All the clearer it makes:<br>
When the spring<br>
Times the poets still nibble on<br>
Are abruptly<br>
Pulled down; <br>
With the sun, <br>
A blotched face nothing beams upon, <br>
They come down <br>
Like a crown. <br>
And this sun with its springs <br>
To the market we’ll bring, <br>
Hoist them over thru tussle and din, <br>
And for five faded roubles <br>
Toss them <br>
Off to some antiquarian. <br>
Souls spat on, slandered, <br>
Insolent, headstrong,<br>
Altars do strew.<br>
Upon them we'd light <br>
Lamps for vesper nights, <br>
Until it dawns anew. <br>
Find our translations of other poets [over at SilverAgePoets.com](https://www.silveragepoets.com)! <br>
In the coming weeks, we will finally update the site with translations from the works of Sillov, his partner Olga Petrovskaya, a number of his above-mentioned friends, and many other dead poets! <br>
Beyond that, other forms of translations, adaptation, actions, resurrections, poeticizations, generations, and much else, coming soon; here, there, and elsewhere! <br>
## Evocation-Charged Word
With FLUX running & this LoRA activated, include the name `Sillov`, `Vladimir Sillov`, or `poet Vladimir Sillov` in any prompt to conjure the long-deathless poet.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this LoRA adapter to the base pipeline
pipeline.load_lora_weights('AlekseyCalvin/Vladimir_Sillov_SilverAgePoets_FLUX_LoRA', weight_name='lora.safetensors')
# Include the trigger name (e.g. "poet Vladimir Sillov") in the prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
mradermacher/EVA-Tissint-14B-i1-GGUF
|
mradermacher
| 2024-11-15T22:44:11Z | 76 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ockerman0/EVA-Tissint-14B",
"base_model:quantized:ockerman0/EVA-Tissint-14B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T12:01:54Z |
---
base_model: ockerman0/EVA-Tissint-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ockerman0/EVA-Tissint-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
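As a minimal sketch (the part filenames below are hypothetical; use the actual names in the repo), split quants can be rejoined in Python before loading:
```py
# Minimal sketch: rejoin a split quant into a single GGUF file.
# The part names are hypothetical; check the actual filenames in the repo.
import shutil

parts = [
    "EVA-Tissint-14B.i1-Q6_K.gguf.part1of2",
    "EVA-Tissint-14B.i1-Q6_K.gguf.part2of2",
]
with open("EVA-Tissint-14B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```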
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF/resolve/main/EVA-Tissint-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
masafresh/swin-transformer2
|
masafresh
| 2024-11-15T22:43:28Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-large-patch4-window12-384",
"base_model:finetune:microsoft/swin-large-patch4-window12-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-15T18:53:12Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-large-patch4-window12-384
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: swin-transformer2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-transformer2
This model is a fine-tuned version of [microsoft/swin-large-patch4-window12-384](https://huggingface.co/microsoft/swin-large-patch4-window12-384) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2129
- Accuracy: 0.6386
- F1: 0.6328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
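For reference, here is a minimal sketch mapping the hyperparameters above onto `transformers.TrainingArguments`; it is not the original training script, and the dataset and label setup are omitted.
```py
# Minimal sketch of the listed hyperparameters, not the original script.
from transformers import AutoModelForImageClassification, TrainingArguments

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-large-patch4-window12-384",
    num_labels=5,                  # hypothetical; the card does not state the label count
    ignore_mismatched_sizes=True,  # replace the pretrained classification head
)
args = TrainingArguments(
    output_dir="swin-transformer2",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    seed=42,
)
```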
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 1.6336 | 0.9840 | 46 | 1.6510 | 0.2530 | 0.1876 |
| 1.2894 | 1.9893 | 93 | 1.2218 | 0.4458 | 0.3780 |
| 1.0959 | 2.9947 | 140 | 1.1383 | 0.5060 | 0.3518 |
| 1.0467 | 4.0 | 187 | 0.9372 | 0.5542 | 0.4352 |
| 0.9879 | 4.9840 | 233 | 1.0139 | 0.5301 | 0.4718 |
| 0.9086 | 5.9893 | 280 | 0.8822 | 0.6627 | 0.6359 |
| 0.9776 | 6.9947 | 327 | 1.0269 | 0.5542 | 0.5139 |
| 0.9715 | 8.0 | 374 | 0.7964 | 0.5663 | 0.5588 |
| 0.9049 | 8.9840 | 420 | 0.7839 | 0.5904 | 0.5346 |
| 0.8697 | 9.9893 | 467 | 1.0379 | 0.5663 | 0.4921 |
| 0.882 | 10.9947 | 514 | 0.9132 | 0.5663 | 0.5379 |
| 0.832 | 12.0 | 561 | 0.8513 | 0.5783 | 0.5008 |
| 0.7475 | 12.9840 | 607 | 0.7612 | 0.6627 | 0.6427 |
| 0.9056 | 13.9893 | 654 | 0.8431 | 0.6145 | 0.5725 |
| 0.9978 | 14.9947 | 701 | 0.7221 | 0.7108 | 0.6983 |
| 0.6956 | 16.0 | 748 | 0.7545 | 0.6145 | 0.5888 |
| 0.7185 | 16.9840 | 794 | 0.6561 | 0.6627 | 0.6499 |
| 0.8139 | 17.9893 | 841 | 0.7512 | 0.6506 | 0.6386 |
| 0.6837 | 18.9947 | 888 | 0.6491 | 0.6988 | 0.6849 |
| 0.5191 | 20.0 | 935 | 0.7290 | 0.6386 | 0.6336 |
| 0.6538 | 20.9840 | 981 | 0.8000 | 0.6988 | 0.6621 |
| 0.7912 | 21.9893 | 1028 | 1.0183 | 0.6145 | 0.5824 |
| 0.6093 | 22.9947 | 1075 | 0.9124 | 0.6506 | 0.6396 |
| 0.5312 | 24.0 | 1122 | 0.9098 | 0.6024 | 0.5581 |
| 0.6654 | 24.9840 | 1168 | 1.0432 | 0.5422 | 0.5028 |
| 0.5798 | 25.9893 | 1215 | 0.7369 | 0.6627 | 0.6553 |
| 0.506 | 26.9947 | 1262 | 0.9057 | 0.6265 | 0.6236 |
| 0.4638 | 28.0 | 1309 | 0.7950 | 0.6867 | 0.6644 |
| 0.371 | 28.9840 | 1355 | 1.0368 | 0.6627 | 0.6473 |
| 0.4721 | 29.9893 | 1402 | 0.8129 | 0.6747 | 0.6673 |
| 0.54 | 30.9947 | 1449 | 1.0379 | 0.6627 | 0.6491 |
| 0.3978 | 32.0 | 1496 | 1.3857 | 0.5904 | 0.5481 |
| 0.3503 | 32.9840 | 1542 | 1.0920 | 0.6024 | 0.5847 |
| 0.4407 | 33.9893 | 1589 | 1.1912 | 0.5904 | 0.5505 |
| 0.3786 | 34.9947 | 1636 | 1.5071 | 0.6024 | 0.5915 |
| 0.3482 | 36.0 | 1683 | 1.1161 | 0.6386 | 0.6240 |
| 0.2695 | 36.9840 | 1729 | 1.2040 | 0.5904 | 0.5704 |
| 0.2296 | 37.9893 | 1776 | 1.5781 | 0.5181 | 0.4691 |
| 0.2922 | 38.9947 | 1823 | 1.3713 | 0.6024 | 0.5879 |
| 0.1511 | 40.0 | 1870 | 1.1638 | 0.6506 | 0.6553 |
| 0.2814 | 40.9840 | 1916 | 1.3384 | 0.6988 | 0.6939 |
| 0.2196 | 41.9893 | 1963 | 1.2872 | 0.6506 | 0.6330 |
| 0.2477 | 42.9947 | 2010 | 1.5322 | 0.6627 | 0.6375 |
| 0.3296 | 44.0 | 2057 | 1.3479 | 0.6506 | 0.6353 |
| 0.2015 | 44.9840 | 2103 | 1.2521 | 0.6145 | 0.6044 |
| 0.3476 | 45.9893 | 2150 | 1.2464 | 0.6747 | 0.6641 |
| 0.189 | 46.9947 | 2197 | 1.4480 | 0.6506 | 0.6235 |
| 0.1852 | 48.0 | 2244 | 1.3611 | 0.6747 | 0.6594 |
| 0.2798 | 48.9840 | 2290 | 1.4427 | 0.6988 | 0.6957 |
| 0.1523 | 49.9893 | 2337 | 1.3352 | 0.6506 | 0.6450 |
| 0.1224 | 50.9947 | 2384 | 1.8088 | 0.6386 | 0.6201 |
| 0.0926 | 52.0 | 2431 | 1.4695 | 0.6506 | 0.6296 |
| 0.2071 | 52.9840 | 2477 | 1.4673 | 0.6867 | 0.6806 |
| 0.1063 | 53.9893 | 2524 | 1.4862 | 0.7108 | 0.6975 |
| 0.1831 | 54.9947 | 2571 | 1.4666 | 0.6506 | 0.6161 |
| 0.158 | 56.0 | 2618 | 1.8832 | 0.6988 | 0.6673 |
| 0.26 | 56.9840 | 2664 | 1.5855 | 0.6386 | 0.5986 |
| 0.1697 | 57.9893 | 2711 | 1.2184 | 0.7470 | 0.7434 |
| 0.2024 | 58.9947 | 2758 | 1.3524 | 0.6867 | 0.6682 |
| 0.2495 | 60.0 | 2805 | 1.7523 | 0.6627 | 0.6427 |
| 0.1247 | 60.9840 | 2851 | 1.7007 | 0.6506 | 0.6372 |
| 0.1436 | 61.9893 | 2898 | 1.9171 | 0.6386 | 0.6120 |
| 0.1438 | 62.9947 | 2945 | 1.8998 | 0.6265 | 0.5897 |
| 0.1137 | 64.0 | 2992 | 2.4028 | 0.5904 | 0.5498 |
| 0.1619 | 64.9840 | 3038 | 1.7087 | 0.7470 | 0.7473 |
| 0.1105 | 65.9893 | 3085 | 1.6545 | 0.6988 | 0.6975 |
| 0.1597 | 66.9947 | 3132 | 1.8024 | 0.6747 | 0.6758 |
| 0.0338 | 68.0 | 3179 | 1.8962 | 0.6747 | 0.6706 |
| 0.1184 | 68.9840 | 3225 | 2.1642 | 0.7108 | 0.7102 |
| 0.0878 | 69.9893 | 3272 | 2.0974 | 0.6506 | 0.6610 |
| 0.0963 | 70.9947 | 3319 | 1.8719 | 0.7108 | 0.7162 |
| 0.0827 | 72.0 | 3366 | 1.7538 | 0.6988 | 0.7000 |
| 0.0933 | 72.9840 | 3412 | 1.9357 | 0.6988 | 0.6988 |
| 0.0593 | 73.9893 | 3459 | 1.9924 | 0.6506 | 0.6420 |
| 0.0423 | 74.9947 | 3506 | 2.2029 | 0.6627 | 0.6702 |
| 0.0311 | 76.0 | 3553 | 1.9236 | 0.7108 | 0.7155 |
| 0.1881 | 76.9840 | 3599 | 1.9606 | 0.6747 | 0.6787 |
| 0.0566 | 77.9893 | 3646 | 2.1122 | 0.6265 | 0.6206 |
| 0.0266 | 78.9947 | 3693 | 2.1469 | 0.6506 | 0.6536 |
| 0.1015 | 80.0 | 3740 | 2.0335 | 0.6506 | 0.6587 |
| 0.1083 | 80.9840 | 3786 | 2.2123 | 0.6506 | 0.6509 |
| 0.0161 | 81.9893 | 3833 | 2.3094 | 0.6988 | 0.7064 |
| 0.0194 | 82.9947 | 3880 | 2.3315 | 0.6145 | 0.6101 |
| 0.113 | 84.0 | 3927 | 2.5276 | 0.6867 | 0.6908 |
| 0.0653 | 84.9840 | 3973 | 2.0321 | 0.6265 | 0.6263 |
| 0.0684 | 85.9893 | 4020 | 2.0302 | 0.6627 | 0.6706 |
| 0.1724 | 86.9947 | 4067 | 2.5865 | 0.5904 | 0.5860 |
| 0.028 | 88.0 | 4114 | 2.3814 | 0.5904 | 0.5804 |
| 0.0528 | 88.9840 | 4160 | 2.2804 | 0.6386 | 0.6410 |
| 0.0341 | 89.9893 | 4207 | 2.0635 | 0.5783 | 0.5736 |
| 0.0074 | 90.9947 | 4254 | 2.3491 | 0.6024 | 0.5993 |
| 0.0165 | 92.0 | 4301 | 2.2152 | 0.6145 | 0.6036 |
| 0.0157 | 92.9840 | 4347 | 2.3380 | 0.6145 | 0.6036 |
| 0.0544 | 93.9893 | 4394 | 2.3319 | 0.6265 | 0.6221 |
| 0.0577 | 94.9947 | 4441 | 2.2671 | 0.6265 | 0.6221 |
| 0.1516 | 96.0 | 4488 | 2.2034 | 0.6265 | 0.6204 |
| 0.0318 | 96.9840 | 4534 | 2.1932 | 0.6265 | 0.6204 |
| 0.043 | 97.9893 | 4581 | 2.2178 | 0.6265 | 0.6204 |
| 0.0099 | 98.3957 | 4600 | 2.2129 | 0.6386 | 0.6328 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
kxbrow9/PoseyFLUX2
|
kxbrow9
| 2024-11-15T22:32:09Z | 6 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-15T22:31:26Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: PoseyFLUX2
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# PoseyFLUX2
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `PoseyFLUX2` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
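For diffusers users, a minimal sketch follows; the weight filename is an assumption (the card only states that Safetensors weights are available), so adjust it to the actual file in the repo.
```py
# Minimal diffusers sketch; weight_name is an assumption, check the repo files.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("kxbrow9/PoseyFLUX2", weight_name="PoseyFLUX2.safetensors")
# Use the trigger word from the card in the prompt
image = pipe("PoseyFLUX2, portrait photo, natural light").images[0]
image.save("posey.png")
```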
|
neeleshg23/jamba-1.9b-7
|
neeleshg23
| 2024-11-15T22:30:18Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"jamba",
"text-generation",
"generated_from_trainer",
"base_model:neeleshg23/jamba-1.9b-6",
"base_model:finetune:neeleshg23/jamba-1.9b-6",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T10:41:42Z |
---
library_name: transformers
base_model: neeleshg23/jamba-1.9b-6
tags:
- generated_from_trainer
model-index:
- name: jamba-1.9b-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jamba-1.9b-7
This model is a fine-tuned version of [neeleshg23/jamba-1.9b-6](https://huggingface.co/neeleshg23/jamba-1.9b-6) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
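For orientation only, a sketch of these settings as `TrainingArguments`; the 4-GPU data parallelism comes from the launcher (e.g. `torchrun --nproc_per_node 4`), not from the arguments themselves.
```py
# Minimal sketch of the listed settings, not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="jamba-1.9b-7",
    learning_rate=1e-3,
    per_device_train_batch_size=8,  # x4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,   # x4 GPUs -> total eval batch size 32
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=1,
    seed=42,
)
```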
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
plesniar/nhx_nec100_checkpoint
|
plesniar
| 2024-11-15T22:25:29Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-15T22:10:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
davidilag/wav2vec2-xls-r-1b-faroese-100h-30-epochs
|
davidilag
| 2024-11-15T22:17:39Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-14T22:28:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Farhang87/gemma-soap-best-merged
|
Farhang87
| 2024-11-15T22:09:38Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T22:06:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Orca-Hermes-7B-slerp-GGUF
|
mradermacher
| 2024-11-15T22:06:10Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"Open-Orca/Mistral-7B-OpenOrca",
"teknium/OpenHermes-2.5-Mistral-7B",
"en",
"base_model:cris177/Orca-Hermes-7B-slerp",
"base_model:quantized:cris177/Orca-Hermes-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T21:35:20Z |
---
base_model: cris177/Orca-Hermes-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- Open-Orca/Mistral-7B-OpenOrca
- teknium/OpenHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cris177/Orca-Hermes-7B-slerp
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-Hermes-7B-slerp-GGUF/resolve/main/Orca-Hermes-7B-slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/LLaMA-Pro-8B-i1-GGUF
|
mradermacher
| 2024-11-15T22:06:10Z | 19 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TencentARC/LLaMA-Pro-8B",
"base_model:quantized:TencentARC/LLaMA-Pro-8B",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-15T20:29:50Z |
---
base_model: TencentARC/LLaMA-Pro-8B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TencentARC/LLaMA-Pro-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LLaMA-Pro-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-i1-GGUF/resolve/main/LLaMA-Pro-8B.i1-Q6_K.gguf) | i1-Q6_K | 7.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ProdeusUnity/Prismatic-12b-v0.1-Experimental-1115
|
ProdeusUnity
| 2024-11-15T22:04:57Z | 6 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T20:11:50Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Prismatic 12b v0.1 Experimental 11/15
## This release fixes the ChatML format; the previous version did not have an EOS token
*The sparkling courage I longed for, what I got is small... My tears are surely the prism of tomorrow... Say "Hello!" to the ideal future, let's go see them~*
Listen to the song on youtube: https://www.youtube.com/watch?v=v3I6EVlyPx4
A one-off merge for a friend, though it came out rather good. I like it, so try it?
Merged models:
- mistralai/Mistral-Nemo-Base-2407
- inflatebot/MN-12b-Mag-Mell-R1
- nbeerbower/Mistral-Nemo-Prism-12B-v5
License for this model: Apache 2.0
Format: Mistral Tekken or ChatML
Thank you to AuriAetherwiing for helping me merge the models and for providing compute (A40).
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the TIES merge method, with mistralai_Mistral-Nemo-Base-2407 as the base.
### Models Merged
The following models were included in the merge:
- /inflatebot_MN-12B-Mag-Mell-R1
- /nbeerbower_Mistral-Nemo-Prism-12B-v5
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /inflatebot_MN-12B-Mag-Mell-R1
    parameters:
      weight: 0.3
      density: 0.5
  - model: /nbeerbower_Mistral-Nemo-Prism-12B-v5
    parameters:
      weight: 0.4
      density: 0.75
base_model: /mistralai_Mistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  normalize: true
  lambda: 1
merge_method: ties
dtype: bfloat16
```
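A minimal sketch of applying this configuration with mergekit's command-line entry point (assuming the YAML above is saved locally and the referenced model directories exist):
```py
# Minimal sketch: run the merge via mergekit's CLI (pip install mergekit).
# Assumes the YAML above is saved as prismatic.yaml and the model paths exist.
import subprocess

subprocess.run(
    ["mergekit-yaml", "prismatic.yaml", "./Prismatic-12b-v0.1", "--cuda"],
    check=True,
)
```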
|
mradermacher/gemma-2-2b-it-GGUF
|
mradermacher
| 2024-11-15T21:58:12Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"conversational",
"en",
"base_model:google/gemma-2-2b-it",
"base_model:quantized:google/gemma-2-2b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T20:02:07Z |
---
base_model: google/gemma-2-2b-it
extra_gated_button_content: Acknowledge license
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/google/gemma-2-2b-it
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-2b-it-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-GGUF/resolve/main/gemma-2-2b-it.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AlexWortega/qwen23k
|
AlexWortega
| 2024-11-15T21:57:29Z | 5 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1077240",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-15T21:56:44Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1077240
- loss:MultipleNegativesRankingLoss
base_model: Qwen/Qwen2.5-0.5B-Instruct
widget:
- source_sentence: Who is the father of philosophy?
sentences:
- 'Charles Sanders Peirce
Charles Sanders Peirce (/pɜːrs/[9] "purse"; 10September 1839 – 19April 1914) was
an American philosopher, logician, mathematician, and scientist who is sometimes
known as "the father of pragmatism". He was educated as a chemist and employed
as a scientist for 30 years. Today he is appreciated largely for his contributions
to logic, mathematics, philosophy, scientific methodology, and semiotics, and
for his founding of pragmatism.'
- 'Georg Wilhelm Friedrich Hegel
According to Hegel, "Heraclitus is the one who first declared the nature of the
infinite and first grasped nature as in itself infinite, that is, its essence
as process. The origin of philosophy is to be dated from Heraclitus. His is the
persistent Idea that is the same in all philosophers up to the present day, as
it was the Idea of Plato and Aristotle". For Hegel, Heraclitus''s great achievements
were to have understood the nature of the infinite, which for Hegel includes understanding
the inherent contradictoriness and negativity of reality; and to have grasped
that reality is becoming or process and that "being" and "nothingness" are mere
empty abstractions. According to Hegel, Heraclitus''s "obscurity" comes from his
being a true (in Hegel''s terms "speculative") philosopher who grasped the ultimate
philosophical truth and therefore expressed himself in a way that goes beyond
the abstract and limited nature of common sense and is difficult to grasp by those
who operate within common sense. Hegel asserted that in Heraclitus he had an antecedent
for his logic: "[...] there is no proposition of Heraclitus which I have not adopted
in my logic".'
- 'History of nuclear weapons
The notion of using a fission weapon to ignite a process of nuclear fusion can
be dated back to 1942. At the first major theoretical conference on the development
of an atomic bomb hosted by J. Robert Oppenheimer at the University of California,
Berkeley, participant Edward Teller directed the majority of the discussion towards
Enrico Fermi''s idea of a "Super" bomb that would use the same reactions that
powered the Sun itself.'
- source_sentence: When was Father's Day first celebrated in America?
sentences:
- 'Father''s Day (United States)
Father''s Day was founded in Spokane, Washington at the YMCA in 1910 by Sonora
Smart Dodd, who was born in Arkansas.[4] Its first celebration was in the Spokane
YMCA on June 19, 1910.[4][5] Her father, the Civil War veteran William Jackson
Smart, was a single parent who raised his six children there.[4] After hearing
a sermon about Jarvis'' Mother''s Day at Central Methodist Episcopal Church in
1909, she told her pastor that fathers should have a similar holiday honoring
them.[4][6] Although she initially suggested June 5, her father''s birthday, the
pastors did not have enough time to prepare their sermons, and the celebration
was deferred to the third Sunday of June.[7][8]'
- 'Father''s Day
In [[Peru]], Father''s Day is celebrated on the third Sunday of June and is not
a public holiday. People usually give a present to their fathers and spend time
with him mostly during a family meal.'
- 'Sacramento River
The Sacramento and its wide natural floodplain were once abundant in fish and
other aquatic creatures, notably one of the southernmost large runs of chinook
salmon in North America. For about 12,000 years, humans have depended on the vast
natural resources of the watershed, which had one of the densest Native American
populations in California. The river has provided a route for trade and travel
since ancient times. Hundreds of tribes sharing regional customs and traditions
inhabited the Sacramento Valley, first coming into contact with European explorers
in the late 1700s. The Spanish explorer Gabriel Moraga named the river Rio de
los Sacramentos in 1808, later shortened and anglicized into Sacramento.'
- source_sentence: What is the population of Austria in 2018?
sentences:
- 'Utah State Capitol
The Utah State Capitol is the house of government for the U.S. state of Utah.
The building houses the chambers and offices of the Utah State Legislature, the
offices of the Governor, Lieutenant Governor, Attorney General, the State Auditor
and their staffs. The capitol is the main building of the Utah State Capitol Complex,
which is located on Capitol Hill, overlooking downtown Salt Lake City.'
- 'Same-sex marriage in Austria
A September 2018 poll for "Österreich" found that 74% of Austrians supported same-sex
marriage and 26% were against.'
- 'Demographics of Austria
Population 8,793,370 (July 2018 est.) country comparison to the world: 96th'
- source_sentence: What language family is Malay?
sentences:
- 'Malay language
Malay is a member of the Austronesian family of languages, which includes languages
from Southeast Asia and the Pacific Ocean, with a smaller number in continental
Asia. Malagasy, a geographic outlier spoken in Madagascar in the Indian Ocean,
is also a member of this language family. Although each language of the family
is mutually unintelligible, their similarities are rather striking. Many roots
have come virtually unchanged from their common ancestor, Proto-Austronesian language.
There are many cognates found in the languages'' words for kinship, health, body
parts and common animals. Numbers, especially, show remarkable similarities.'
- 'Filipinos of Malay descent
In the Philippines, there is misconception and often mixing between the two definitions.
Filipinos consider Malays as being the natives of the Philippines, Indonesia,
Malaysia and Brunei. Consequently, Filipinos consider themselves Malay when in
reality, they are referring to the Malay Race. Filipinos in Singapore also prefer
to be considered Malay, but their desire to be labeled as part of the ethnic group
was rejected by the Singaporean government. Paradoxically, a minor percentage
of Filipinos prefer the Spanish influence and may associate themselves with being
Hispanic, and have made no realistic attempts to promote and/or revive the Malay
language in the Philippines.'
- 'Preferred provider organization
In health insurance in the United States, a preferred provider organization (PPO),
sometimes referred to as a participating provider organization or preferred provider
option, is a managed care organization of medical doctors, hospitals, and other
health care providers who have agreed with an insurer or a third-party administrator
to provide health care at reduced rates to the insurer''s or administrator''s
clients.'
- source_sentence: When was ABC formed?
sentences:
- 'American Broadcasting Company
ABC launched as a radio network on October 12, 1943, serving as the successor
to the NBC Blue Network, which had been purchased by Edward J. Noble. It extended
its operations to television in 1948, following in the footsteps of established
broadcast networks CBS and NBC. In the mid-1950s, ABC merged with United Paramount
Theatres, a chain of movie theaters that formerly operated as a subsidiary of
Paramount Pictures. Leonard Goldenson, who had been the head of UPT, made the
new television network profitable by helping develop and greenlight many successful
series. In the 1980s, after purchasing an 80% interest in cable sports channel
ESPN, the network''s corporate parent, American Broadcasting Companies, Inc.,
merged with Capital Cities Communications, owner of several print publications,
and television and radio stations. In 1996, most of Capital Cities/ABC''s assets
were purchased by The Walt Disney Company.'
- 'Roman concrete
Roman concrete, also called opus caementicium, was a material used in construction
during the late Roman Republic until the fading of the Roman Empire. Roman concrete
was based on a hydraulic-setting cement. Recently, it has been found that it materially
differs in several ways from modern concrete which is based on Portland cement.
Roman concrete is durable due to its incorporation of volcanic ash, which prevents
cracks from spreading. By the middle of the 1st century, the material was used
frequently, often brick-faced, although variations in aggregate allowed different
arrangements of materials. Further innovative developments in the material, called
the Concrete Revolution, contributed to structurally complicated forms, such as
the Pantheon dome, the world''s largest and oldest unreinforced concrete dome.[1]'
- 'Americans Battling Communism
Americans Battling Communism, Inc. (ABC) was an anti-communist organization created
following an October 1947 speech by Pennsylvania Judge Blair Gunther that called
for an "ABC movement" to educate America about communism. Chartered in November
1947 by Harry Alan Sherman, a local lawyer active in various anti-communist organizations,
the group took part in such activities as blacklisting by disclosing the names
of people suspected of being communists. Its members included local judges and
lawyers active in the McCarthy-era prosecution of communists.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 896
type: sts-dev-896
metrics:
- type: pearson_cosine
value: 0.8199747689342192
name: Pearson Cosine
- type: spearman_cosine
value: 0.8176114370831747
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 768
type: sts-dev-768
metrics:
- type: pearson_cosine
value: 0.8177656539407367
name: Pearson Cosine
- type: spearman_cosine
value: 0.8154555109705525
name: Spearman Cosine
---
# SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It maps sentences & paragraphs to an 896-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) <!-- at revision 7ae557604adf67be50417f59c2c2f167def9a775 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 896 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 896, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
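For intuition, the two modules above amount to running the Qwen2 encoder and averaging its token embeddings over the attention mask. A minimal sketch with plain 🤗 Transformers (this assumes the repository also ships standard `transformers` weights; the supported path is the `SentenceTransformer` API in the next section):
```python
# Sketch only: reproduce Transformer -> mean pooling by hand.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("AlexWortega/qwen23k")
encoder = AutoModel.from_pretrained("AlexWortega/qwen23k")  # Qwen2Model backbone

batch = tokenizer(
    ["Who is the father of philosophy?"],
    padding=True, truncation=True, max_length=1024, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # [batch, seq, 896]

# pooling_mode_mean_tokens=True: average over non-padding tokens only
mask = batch["attention_mask"].unsqueeze(-1).to(token_embeddings.dtype)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 896])
```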
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AlexWortega/qwen23k")
# Run inference
sentences = [
'When was ABC formed?',
"American Broadcasting Company\nABC launched as a radio network on October 12, 1943, serving as the successor to the NBC Blue Network, which had been purchased by Edward J. Noble. It extended its operations to television in 1948, following in the footsteps of established broadcast networks CBS and NBC. In the mid-1950s, ABC merged with United Paramount Theatres, a chain of movie theaters that formerly operated as a subsidiary of Paramount Pictures. Leonard Goldenson, who had been the head of UPT, made the new television network profitable by helping develop and greenlight many successful series. In the 1980s, after purchasing an 80% interest in cable sports channel ESPN, the network's corporate parent, American Broadcasting Companies, Inc., merged with Capital Cities Communications, owner of several print publications, and television and radio stations. In 1996, most of Capital Cities/ABC's assets were purchased by The Walt Disney Company.",
'Americans Battling Communism\nAmericans Battling Communism, Inc. (ABC) was an anti-communist organization created following an October 1947 speech by Pennsylvania Judge Blair Gunther that called for an "ABC movement" to educate America about communism. Chartered in November 1947 by Harry Alan Sherman, a local lawyer active in various anti-communist organizations, the group took part in such activities as blacklisting by disclosing the names of people suspected of being communists. Its members included local judges and lawyers active in the McCarthy-era prosecution of communists.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 896]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
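Since `encode` and `similarity` return plain arrays/tensors, the same API covers retrieval-style use. A hypothetical semantic-search sketch (the corpus strings are illustrative):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AlexWortega/qwen23k")

# Embed a small corpus once, then rank it for each incoming query.
corpus = [
    "ABC launched as a radio network on October 12, 1943.",
    "Roman concrete was based on a hydraulic-setting cement.",
    "Malay is a member of the Austronesian family of languages.",
]
corpus_embeddings = model.encode(corpus)

query_embedding = model.encode(["When was ABC formed?"])
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))
```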
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev-896` and `sts-dev-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-dev-896 | sts-dev-768 |
|:--------------------|:------------|:------------|
| pearson_cosine | 0.82 | 0.8178 |
| **spearman_cosine** | **0.8176** | **0.8155** |
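The numbers above come from `EmbeddingSimilarityEvaluator`; a minimal sketch of scoring the model the same way (the pairs below are made up; the actual `sts-dev-896`/`sts-dev-768` splits are not bundled with this card):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("AlexWortega/qwen23k")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["A man is eating food.", "A plane is taking off.", "A dog runs."],
    sentences2=["A man eats something.", "An airplane departs.", "A cat sleeps."],
    scores=[0.9, 0.95, 0.1],  # gold similarities, normalized to [0, 1]
    name="sts-dev-demo",
)
print(evaluator(model))  # pearson/spearman cosine metrics, as tabulated above
```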
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,077,240 training samples
* Columns: <code>query</code>, <code>response</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | response | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 8.76 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 141.88 tokens</li><li>max: 532 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 134.02 tokens</li><li>max: 472 tokens</li></ul> |
* Samples:
| query | response | negative |
|:--------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Was there a year 0?</code> | <code>Year zero<br>Year zero does not exist in the anno Domini system usually used to number years in the Gregorian calendar and in its predecessor, the Julian calendar. In this system, the year 1 BC is followed by AD 1. However, there is a year zero in astronomical year numbering (where it coincides with the Julian year 1 BC) and in ISO 8601:2004 (where it coincides with the Gregorian year 1 BC) as well as in all Buddhist and Hindu calendars.</code> | <code>504<br>Year 504 (DIV) was a leap year starting on Thursday (link will display the full calendar) of the Julian calendar. At the time, it was known as the Year of the Consulship of Nicomachus without colleague (or, less frequently, year 1257 "Ab urbe condita"). The denomination 504 for this year has been used since the early medieval period, when the Anno Domini calendar era became the prevalent method in Europe for naming years.</code> |
| <code>When is the dialectical method used?</code> | <code>Dialectic<br>Dialectic or dialectics (Greek: διαλεκτική, dialektikḗ; related to dialogue), also known as the dialectical method, is at base a discourse between two or more people holding different points of view about a subject but wishing to establish the truth through reasoned arguments. Dialectic resembles debate, but the concept excludes subjective elements such as emotional appeal and the modern pejorative sense of rhetoric.[1][2] Dialectic may be contrasted with the didactic method, wherein one side of the conversation teaches the other. Dialectic is alternatively known as minor logic, as opposed to major logic or critique.</code> | <code>Derek Bentley case<br>Another factor in the posthumous defence was that a "confession" recorded by Bentley, which was claimed by the prosecution to be a "verbatim record of dictated monologue", was shown by forensic linguistics methods to have been largely edited by policemen. Linguist Malcolm Coulthard showed that certain patterns, such as the frequency of the word "then" and the grammatical use of "then" after the grammatical subject ("I then" rather than "then I"), were not consistent with Bentley's use of language (his idiolect), as evidenced in court testimony. These patterns fit better the recorded testimony of the policemen involved. This is one of the earliest uses of forensic linguistics on record.</code> |
| <code>What do Grasshoppers eat?</code> | <code>Grasshopper<br>Grasshoppers are plant-eaters, with a few species at times becoming serious pests of cereals, vegetables and pasture, especially when they swarm in their millions as locusts and destroy crops over wide areas. They protect themselves from predators by camouflage; when detected, many species attempt to startle the predator with a brilliantly-coloured wing-flash while jumping and (if adult) launching themselves into the air, usually flying for only a short distance. Other species such as the rainbow grasshopper have warning coloration which deters predators. Grasshoppers are affected by parasites and various diseases, and many predatory creatures feed on both nymphs and adults. The eggs are the subject of attack by parasitoids and predators.</code> | <code>Groundhog<br>Very often the dens of groundhogs provide homes for other animals including skunks, red foxes, and cottontail rabbits. The fox and skunk feed upon field mice, grasshoppers, beetles and other creatures that destroy farm crops. In aiding these animals, the groundhog indirectly helps the farmer. In addition to providing homes for itself and other animals, the groundhog aids in soil improvement by bringing subsoil to the surface. The groundhog is also a valuable game animal and is considered a difficult sport when hunted in a fair manner. In some parts of Appalachia, they are eaten.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
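In practice this loss treats each `response` as the positive for its `query`, with the explicit `negative` column and all other in-batch passages serving as negatives. A sketch of the setup under those parameters (the single triple is illustrative):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Loading the base checkpoint adds mean pooling automatically.
model = SentenceTransformer("Qwen/Qwen2.5-0.5B-Instruct")
train_dataset = Dataset.from_dict({
    "query": ["Was there a year 0?"],
    "response": ["Year zero does not exist in the anno Domini system."],
    "negative": ["Year 504 (DIV) was a leap year starting on Thursday."],
})
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```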
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `gradient_accumulation_steps`: 4
- `num_train_epochs`: 1
- `warmup_ratio`: 0.3
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.3
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
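Taken together, the non-default values correspond roughly to the trainer configuration sketched below (the output path and tiny dataset are placeholders; `model`, `train_dataset`, and `loss` follow the loss sketch earlier in this card):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("Qwen/Qwen2.5-0.5B-Instruct")
train_dataset = Dataset.from_dict({
    "query": ["Was there a year 0?"],
    "response": ["Year zero does not exist in the anno Domini system."],
    "negative": ["Year 504 (DIV) was a leap year starting on Thursday."],
})
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="qwen25-embedder",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=4,
    warmup_ratio=0.3,
    bf16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; real runs used the sts-dev evaluators
    loss=loss,
)
trainer.train()
```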
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | sts-dev-896_spearman_cosine | sts-dev-768_spearman_cosine |
|:------:|:-----:|:-------------:|:---------------------------:|:---------------------------:|
| 0.0004 | 10 | 2.2049 | - | - |
| 0.0009 | 20 | 2.3168 | - | - |
| 0.0013 | 30 | 2.3544 | - | - |
| 0.0018 | 40 | 2.2519 | - | - |
| 0.0022 | 50 | 2.1809 | - | - |
| 0.0027 | 60 | 2.1572 | - | - |
| 0.0031 | 70 | 2.1855 | - | - |
| 0.0036 | 80 | 2.5887 | - | - |
| 0.0040 | 90 | 2.883 | - | - |
| 0.0045 | 100 | 2.8557 | - | - |
| 0.0049 | 110 | 2.9356 | - | - |
| 0.0053 | 120 | 2.8833 | - | - |
| 0.0058 | 130 | 2.8394 | - | - |
| 0.0062 | 140 | 2.923 | - | - |
| 0.0067 | 150 | 2.8191 | - | - |
| 0.0071 | 160 | 2.8658 | - | - |
| 0.0076 | 170 | 2.8252 | - | - |
| 0.0080 | 180 | 2.8312 | - | - |
| 0.0085 | 190 | 2.7761 | - | - |
| 0.0089 | 200 | 2.7193 | - | - |
| 0.0094 | 210 | 2.724 | - | - |
| 0.0098 | 220 | 2.7484 | - | - |
| 0.0102 | 230 | 2.7262 | - | - |
| 0.0107 | 240 | 2.6964 | - | - |
| 0.0111 | 250 | 2.6676 | - | - |
| 0.0116 | 260 | 2.6715 | - | - |
| 0.0120 | 270 | 2.6145 | - | - |
| 0.0125 | 280 | 2.6191 | - | - |
| 0.0129 | 290 | 1.9812 | - | - |
| 0.0134 | 300 | 1.6413 | - | - |
| 0.0138 | 310 | 1.6126 | - | - |
| 0.0143 | 320 | 1.3599 | - | - |
| 0.0147 | 330 | 1.2996 | - | - |
| 0.0151 | 340 | 1.2654 | - | - |
| 0.0156 | 350 | 1.9409 | - | - |
| 0.0160 | 360 | 2.1287 | - | - |
| 0.0165 | 370 | 1.8442 | - | - |
| 0.0169 | 380 | 1.6837 | - | - |
| 0.0174 | 390 | 1.5489 | - | - |
| 0.0178 | 400 | 1.4382 | - | - |
| 0.0183 | 410 | 1.4848 | - | - |
| 0.0187 | 420 | 1.3481 | - | - |
| 0.0192 | 430 | 1.3467 | - | - |
| 0.0196 | 440 | 1.3977 | - | - |
| 0.0201 | 450 | 1.26 | - | - |
| 0.0205 | 460 | 1.2412 | - | - |
| 0.0209 | 470 | 1.316 | - | - |
| 0.0214 | 480 | 1.3501 | - | - |
| 0.0218 | 490 | 1.2246 | - | - |
| 0.0223 | 500 | 1.2271 | - | - |
| 0.0227 | 510 | 1.1871 | - | - |
| 0.0232 | 520 | 1.1685 | - | - |
| 0.0236 | 530 | 1.1624 | - | - |
| 0.0241 | 540 | 1.1911 | - | - |
| 0.0245 | 550 | 1.1978 | - | - |
| 0.0250 | 560 | 1.1228 | - | - |
| 0.0254 | 570 | 1.1091 | - | - |
| 0.0258 | 580 | 1.1433 | - | - |
| 0.0263 | 590 | 1.0638 | - | - |
| 0.0267 | 600 | 1.0515 | - | - |
| 0.0272 | 610 | 1.175 | - | - |
| 0.0276 | 620 | 1.0943 | - | - |
| 0.0281 | 630 | 1.1226 | - | - |
| 0.0285 | 640 | 0.9871 | - | - |
| 0.0290 | 650 | 1.0171 | - | - |
| 0.0294 | 660 | 1.0169 | - | - |
| 0.0299 | 670 | 0.9643 | - | - |
| 0.0303 | 680 | 0.9563 | - | - |
| 0.0307 | 690 | 0.9841 | - | - |
| 0.0312 | 700 | 1.0349 | - | - |
| 0.0316 | 710 | 0.8958 | - | - |
| 0.0321 | 720 | 0.9225 | - | - |
| 0.0325 | 730 | 0.842 | - | - |
| 0.0330 | 740 | 0.9104 | - | - |
| 0.0334 | 750 | 0.8927 | - | - |
| 0.0339 | 760 | 0.8508 | - | - |
| 0.0343 | 770 | 0.8835 | - | - |
| 0.0348 | 780 | 0.9531 | - | - |
| 0.0352 | 790 | 0.926 | - | - |
| 0.0356 | 800 | 0.8718 | - | - |
| 0.0361 | 810 | 0.8261 | - | - |
| 0.0365 | 820 | 0.8169 | - | - |
| 0.0370 | 830 | 0.8525 | - | - |
| 0.0374 | 840 | 0.8504 | - | - |
| 0.0379 | 850 | 0.7625 | - | - |
| 0.0383 | 860 | 0.8259 | - | - |
| 0.0388 | 870 | 0.7558 | - | - |
| 0.0392 | 880 | 0.7898 | - | - |
| 0.0397 | 890 | 0.7694 | - | - |
| 0.0401 | 900 | 0.7429 | - | - |
| 0.0405 | 910 | 0.6666 | - | - |
| 0.0410 | 920 | 0.7407 | - | - |
| 0.0414 | 930 | 0.6665 | - | - |
| 0.0419 | 940 | 0.7597 | - | - |
| 0.0423 | 950 | 0.7035 | - | - |
| 0.0428 | 960 | 0.7166 | - | - |
| 0.0432 | 970 | 0.6889 | - | - |
| 0.0437 | 980 | 0.7541 | - | - |
| 0.0441 | 990 | 0.7175 | - | - |
| 0.0446 | 1000 | 0.7389 | 0.6420 | 0.6403 |
| 0.0450 | 1010 | 0.7142 | - | - |
| 0.0454 | 1020 | 0.7301 | - | - |
| 0.0459 | 1030 | 0.7299 | - | - |
| 0.0463 | 1040 | 0.6759 | - | - |
| 0.0468 | 1050 | 0.7036 | - | - |
| 0.0472 | 1060 | 0.6286 | - | - |
| 0.0477 | 1070 | 0.595 | - | - |
| 0.0481 | 1080 | 0.6099 | - | - |
| 0.0486 | 1090 | 0.6377 | - | - |
| 0.0490 | 1100 | 0.6309 | - | - |
| 0.0495 | 1110 | 0.6306 | - | - |
| 0.0499 | 1120 | 0.557 | - | - |
| 0.0504 | 1130 | 0.5898 | - | - |
| 0.0508 | 1140 | 0.5896 | - | - |
| 0.0512 | 1150 | 0.6399 | - | - |
| 0.0517 | 1160 | 0.5923 | - | - |
| 0.0521 | 1170 | 0.5787 | - | - |
| 0.0526 | 1180 | 0.591 | - | - |
| 0.0530 | 1190 | 0.5714 | - | - |
| 0.0535 | 1200 | 0.6047 | - | - |
| 0.0539 | 1210 | 0.5904 | - | - |
| 0.0544 | 1220 | 0.543 | - | - |
| 0.0548 | 1230 | 0.6033 | - | - |
| 0.0553 | 1240 | 0.5445 | - | - |
| 0.0557 | 1250 | 0.5217 | - | - |
| 0.0561 | 1260 | 0.5835 | - | - |
| 0.0566 | 1270 | 0.5353 | - | - |
| 0.0570 | 1280 | 0.5887 | - | - |
| 0.0575 | 1290 | 0.5967 | - | - |
| 0.0579 | 1300 | 0.5036 | - | - |
| 0.0584 | 1310 | 0.5915 | - | - |
| 0.0588 | 1320 | 0.5719 | - | - |
| 0.0593 | 1330 | 0.5238 | - | - |
| 0.0597 | 1340 | 0.5647 | - | - |
| 0.0602 | 1350 | 0.538 | - | - |
| 0.0606 | 1360 | 0.5457 | - | - |
| 0.0610 | 1370 | 0.5169 | - | - |
| 0.0615 | 1380 | 0.4967 | - | - |
| 0.0619 | 1390 | 0.4864 | - | - |
| 0.0624 | 1400 | 0.5133 | - | - |
| 0.0628 | 1410 | 0.5587 | - | - |
| 0.0633 | 1420 | 0.4691 | - | - |
| 0.0637 | 1430 | 0.5186 | - | - |
| 0.0642 | 1440 | 0.4907 | - | - |
| 0.0646 | 1450 | 0.5281 | - | - |
| 0.0651 | 1460 | 0.4741 | - | - |
| 0.0655 | 1470 | 0.4452 | - | - |
| 0.0659 | 1480 | 0.4771 | - | - |
| 0.0664 | 1490 | 0.4289 | - | - |
| 0.0668 | 1500 | 0.4551 | - | - |
| 0.0673 | 1510 | 0.4558 | - | - |
| 0.0677 | 1520 | 0.5159 | - | - |
| 0.0682 | 1530 | 0.4296 | - | - |
| 0.0686 | 1540 | 0.4548 | - | - |
| 0.0691 | 1550 | 0.4439 | - | - |
| 0.0695 | 1560 | 0.4295 | - | - |
| 0.0700 | 1570 | 0.4466 | - | - |
| 0.0704 | 1580 | 0.4717 | - | - |
| 0.0708 | 1590 | 0.492 | - | - |
| 0.0713 | 1600 | 0.4566 | - | - |
| 0.0717 | 1610 | 0.4451 | - | - |
| 0.0722 | 1620 | 0.4715 | - | - |
| 0.0726 | 1630 | 0.4573 | - | - |
| 0.0731 | 1640 | 0.3972 | - | - |
| 0.0735 | 1650 | 0.5212 | - | - |
| 0.0740 | 1660 | 0.4381 | - | - |
| 0.0744 | 1670 | 0.4552 | - | - |
| 0.0749 | 1680 | 0.4767 | - | - |
| 0.0753 | 1690 | 0.4398 | - | - |
| 0.0757 | 1700 | 0.4801 | - | - |
| 0.0762 | 1710 | 0.3751 | - | - |
| 0.0766 | 1720 | 0.4407 | - | - |
| 0.0771 | 1730 | 0.4305 | - | - |
| 0.0775 | 1740 | 0.3938 | - | - |
| 0.0780 | 1750 | 0.4748 | - | - |
| 0.0784 | 1760 | 0.428 | - | - |
| 0.0789 | 1770 | 0.404 | - | - |
| 0.0793 | 1780 | 0.4261 | - | - |
| 0.0798 | 1790 | 0.359 | - | - |
| 0.0802 | 1800 | 0.4422 | - | - |
| 0.0807 | 1810 | 0.4748 | - | - |
| 0.0811 | 1820 | 0.4352 | - | - |
| 0.0815 | 1830 | 0.4032 | - | - |
| 0.0820 | 1840 | 0.4124 | - | - |
| 0.0824 | 1850 | 0.4486 | - | - |
| 0.0829 | 1860 | 0.429 | - | - |
| 0.0833 | 1870 | 0.4189 | - | - |
| 0.0838 | 1880 | 0.3658 | - | - |
| 0.0842 | 1890 | 0.4297 | - | - |
| 0.0847 | 1900 | 0.4215 | - | - |
| 0.0851 | 1910 | 0.3726 | - | - |
| 0.0856 | 1920 | 0.3736 | - | - |
| 0.0860 | 1930 | 0.4287 | - | - |
| 0.0864 | 1940 | 0.4402 | - | - |
| 0.0869 | 1950 | 0.4353 | - | - |
| 0.0873 | 1960 | 0.3622 | - | - |
| 0.0878 | 1970 | 0.3557 | - | - |
| 0.0882 | 1980 | 0.4107 | - | - |
| 0.0887 | 1990 | 0.3982 | - | - |
| 0.0891 | 2000 | 0.453 | 0.7292 | 0.7261 |
| 0.0896 | 2010 | 0.3971 | - | - |
| 0.0900 | 2020 | 0.4374 | - | - |
| 0.0905 | 2030 | 0.4322 | - | - |
| 0.0909 | 2040 | 0.3945 | - | - |
| 0.0913 | 2050 | 0.356 | - | - |
| 0.0918 | 2060 | 0.4182 | - | - |
| 0.0922 | 2070 | 0.3694 | - | - |
| 0.0927 | 2080 | 0.3989 | - | - |
| 0.0931 | 2090 | 0.4237 | - | - |
| 0.0936 | 2100 | 0.3961 | - | - |
| 0.0940 | 2110 | 0.4264 | - | - |
| 0.0945 | 2120 | 0.3609 | - | - |
| 0.0949 | 2130 | 0.4154 | - | - |
| 0.0954 | 2140 | 0.3661 | - | - |
| 0.0958 | 2150 | 0.3328 | - | - |
| 0.0962 | 2160 | 0.3456 | - | - |
| 0.0967 | 2170 | 0.3478 | - | - |
| 0.0971 | 2180 | 0.3339 | - | - |
| 0.0976 | 2190 | 0.3833 | - | - |
| 0.0980 | 2200 | 0.3238 | - | - |
| 0.0985 | 2210 | 0.3871 | - | - |
| 0.0989 | 2220 | 0.4009 | - | - |
| 0.0994 | 2230 | 0.4115 | - | - |
| 0.0998 | 2240 | 0.4024 | - | - |
| 0.1003 | 2250 | 0.35 | - | - |
| 0.1007 | 2260 | 0.3649 | - | - |
| 0.1011 | 2270 | 0.3615 | - | - |
| 0.1016 | 2280 | 0.3898 | - | - |
| 0.1020 | 2290 | 0.3866 | - | - |
| 0.1025 | 2300 | 0.3904 | - | - |
| 0.1029 | 2310 | 0.3321 | - | - |
| 0.1034 | 2320 | 0.3803 | - | - |
| 0.1038 | 2330 | 0.3831 | - | - |
| 0.1043 | 2340 | 0.403 | - | - |
| 0.1047 | 2350 | 0.3803 | - | - |
| 0.1052 | 2360 | 0.3463 | - | - |
| 0.1056 | 2370 | 0.3987 | - | - |
| 0.1060 | 2380 | 0.3731 | - | - |
| 0.1065 | 2390 | 0.353 | - | - |
| 0.1069 | 2400 | 0.3166 | - | - |
| 0.1074 | 2410 | 0.3895 | - | - |
| 0.1078 | 2420 | 0.4025 | - | - |
| 0.1083 | 2430 | 0.3798 | - | - |
| 0.1087 | 2440 | 0.2991 | - | - |
| 0.1092 | 2450 | 0.3094 | - | - |
| 0.1096 | 2460 | 0.3669 | - | - |
| 0.1101 | 2470 | 0.3412 | - | - |
| 0.1105 | 2480 | 0.3697 | - | - |
| 0.1110 | 2490 | 0.369 | - | - |
| 0.1114 | 2500 | 0.3393 | - | - |
| 0.1118 | 2510 | 0.4232 | - | - |
| 0.1123 | 2520 | 0.3445 | - | - |
| 0.1127 | 2530 | 0.4165 | - | - |
| 0.1132 | 2540 | 0.3721 | - | - |
| 0.1136 | 2550 | 0.3476 | - | - |
| 0.1141 | 2560 | 0.2847 | - | - |
| 0.1145 | 2570 | 0.3609 | - | - |
| 0.1150 | 2580 | 0.3017 | - | - |
| 0.1154 | 2590 | 0.374 | - | - |
| 0.1159 | 2600 | 0.3365 | - | - |
| 0.1163 | 2610 | 0.393 | - | - |
| 0.1167 | 2620 | 0.3623 | - | - |
| 0.1172 | 2630 | 0.3538 | - | - |
| 0.1176 | 2640 | 0.3206 | - | - |
| 0.1181 | 2650 | 0.3962 | - | - |
| 0.1185 | 2660 | 0.3087 | - | - |
| 0.1190 | 2670 | 0.3482 | - | - |
| 0.1194 | 2680 | 0.3616 | - | - |
| 0.1199 | 2690 | 0.3955 | - | - |
| 0.1203 | 2700 | 0.3915 | - | - |
| 0.1208 | 2710 | 0.3782 | - | - |
| 0.1212 | 2720 | 0.3576 | - | - |
| 0.1216 | 2730 | 0.3544 | - | - |
| 0.1221 | 2740 | 0.3572 | - | - |
| 0.1225 | 2750 | 0.3107 | - | - |
| 0.1230 | 2760 | 0.3579 | - | - |
| 0.1234 | 2770 | 0.3571 | - | - |
| 0.1239 | 2780 | 0.3694 | - | - |
| 0.1243 | 2790 | 0.3674 | - | - |
| 0.1248 | 2800 | 0.3373 | - | - |
| 0.1252 | 2810 | 0.3362 | - | - |
| 0.1257 | 2820 | 0.3225 | - | - |
| 0.1261 | 2830 | 0.3609 | - | - |
| 0.1265 | 2840 | 0.3681 | - | - |
| 0.1270 | 2850 | 0.4059 | - | - |
| 0.1274 | 2860 | 0.3047 | - | - |
| 0.1279 | 2870 | 0.3446 | - | - |
| 0.1283 | 2880 | 0.3507 | - | - |
| 0.1288 | 2890 | 0.3124 | - | - |
| 0.1292 | 2900 | 0.3712 | - | - |
| 0.1297 | 2910 | 0.3394 | - | - |
| 0.1301 | 2920 | 0.3869 | - | - |
| 0.1306 | 2930 | 0.3449 | - | - |
| 0.1310 | 2940 | 0.3752 | - | - |
| 0.1314 | 2950 | 0.3341 | - | - |
| 0.1319 | 2960 | 0.3329 | - | - |
| 0.1323 | 2970 | 0.36 | - | - |
| 0.1328 | 2980 | 0.3788 | - | - |
| 0.1332 | 2990 | 0.3834 | - | - |
| 0.1337 | 3000 | 0.3426 | 0.7603 | 0.7590 |
| 0.1341 | 3010 | 0.3591 | - | - |
| 0.1346 | 3020 | 0.2923 | - | - |
| 0.1350 | 3030 | 0.332 | - | - |
| 0.1355 | 3040 | 0.3867 | - | - |
| 0.1359 | 3050 | 0.3778 | - | - |
| 0.1363 | 3060 | 0.3389 | - | - |
| 0.1368 | 3070 | 0.3069 | - | - |
| 0.1372 | 3080 | 0.3833 | - | - |
| 0.1377 | 3090 | 0.3497 | - | - |
| 0.1381 | 3100 | 0.3698 | - | - |
| 0.1386 | 3110 | 0.335 | - | - |
| 0.1390 | 3120 | 0.3578 | - | - |
| 0.1395 | 3130 | 0.3171 | - | - |
| 0.1399 | 3140 | 0.3073 | - | - |
| 0.1404 | 3150 | 0.3354 | - | - |
| 0.1408 | 3160 | 0.3338 | - | - |
| 0.1412 | 3170 | 0.367 | - | - |
| 0.1417 | 3180 | 0.3299 | - | - |
| 0.1421 | 3190 | 0.3622 | - | - |
| 0.1426 | 3200 | 0.3158 | - | - |
| 0.1430 | 3210 | 0.3242 | - | - |
| 0.1435 | 3220 | 0.388 | - | - |
| 0.1439 | 3230 | 0.3626 | - | - |
| 0.1444 | 3240 | 0.3371 | - | - |
| 0.1448 | 3250 | 0.3808 | - | - |
| 0.1453 | 3260 | 0.3375 | - | - |
| 0.1457 | 3270 | 0.352 | - | - |
| 0.1462 | 3280 | 0.3466 | - | - |
| 0.1466 | 3290 | 0.3355 | - | - |
| 0.1470 | 3300 | 0.3432 | - | - |
| 0.1475 | 3310 | 0.372 | - | - |
| 0.1479 | 3320 | 0.3501 | - | - |
| 0.1484 | 3330 | 0.3311 | - | - |
| 0.1488 | 3340 | 0.3312 | - | - |
| 0.1493 | 3350 | 0.3276 | - | - |
| 0.1497 | 3360 | 0.3218 | - | - |
| 0.1502 | 3370 | 0.4019 | - | - |
| 0.1506 | 3380 | 0.3132 | - | - |
| 0.1511 | 3390 | 0.3741 | - | - |
| 0.1515 | 3400 | 0.3359 | - | - |
| 0.1519 | 3410 | 0.381 | - | - |
| 0.1524 | 3420 | 0.3024 | - | - |
| 0.1528 | 3430 | 0.3238 | - | - |
| 0.1533 | 3440 | 0.2675 | - | - |
| 0.1537 | 3450 | 0.3568 | - | - |
| 0.1542 | 3460 | 0.3666 | - | - |
| 0.1546 | 3470 | 0.3307 | - | - |
| 0.1551 | 3480 | 0.3698 | - | - |
| 0.1555 | 3490 | 0.3668 | - | - |
| 0.1560 | 3500 | 0.385 | - | - |
| 0.1564 | 3510 | 0.3068 | - | - |
| 0.1568 | 3520 | 0.3015 | - | - |
| 0.1573 | 3530 | 0.3604 | - | - |
| 0.1577 | 3540 | 0.3592 | - | - |
| 0.1582 | 3550 | 0.3483 | - | - |
| 0.1586 | 3560 | 0.3131 | - | - |
| 0.1591 | 3570 | 0.3738 | - | - |
| 0.1595 | 3580 | 0.3719 | - | - |
| 0.1600 | 3590 | 0.3409 | - | - |
| 0.1604 | 3600 | 0.4082 | - | - |
| 0.1609 | 3610 | 0.2881 | - | - |
| 0.1613 | 3620 | 0.3214 | - | - |
| 0.1617 | 3630 | 0.4413 | - | - |
| 0.1622 | 3640 | 0.3706 | - | - |
| 0.1626 | 3650 | 0.3643 | - | - |
| 0.1631 | 3660 | 0.3493 | - | - |
| 0.1635 | 3670 | 0.3877 | - | - |
| 0.1640 | 3680 | 0.3278 | - | - |
| 0.1644 | 3690 | 0.3211 | - | - |
| 0.1649 | 3700 | 0.4104 | - | - |
| 0.1653 | 3710 | 0.4558 | - | - |
| 0.1658 | 3720 | 0.3602 | - | - |
| 0.1662 | 3730 | 0.3348 | - | - |
| 0.1666 | 3740 | 0.2922 | - | - |
| 0.1671 | 3750 | 0.329 | - | - |
| 0.1675 | 3760 | 0.3507 | - | - |
| 0.1680 | 3770 | 0.2853 | - | - |
| 0.1684 | 3780 | 0.3556 | - | - |
| 0.1689 | 3790 | 0.3138 | - | - |
| 0.1693 | 3800 | 0.3536 | - | - |
| 0.1698 | 3810 | 0.3762 | - | - |
| 0.1702 | 3820 | 0.3262 | - | - |
| 0.1707 | 3830 | 0.3571 | - | - |
| 0.1711 | 3840 | 0.3455 | - | - |
| 0.1715 | 3850 | 0.3283 | - | - |
| 0.1720 | 3860 | 0.3317 | - | - |
| 0.1724 | 3870 | 0.2984 | - | - |
| 0.1729 | 3880 | 0.2659 | - | - |
| 0.1733 | 3890 | 0.2844 | - | - |
| 0.1738 | 3900 | 0.2999 | - | - |
| 0.1742 | 3910 | 0.2991 | - | - |
| 0.1747 | 3920 | 0.2667 | - | - |
| 0.1751 | 3930 | 0.3529 | - | - |
| 0.1756 | 3940 | 0.3767 | - | - |
| 0.1760 | 3950 | 0.3909 | - | - |
| 0.1765 | 3960 | 0.3393 | - | - |
| 0.1769 | 3970 | 0.2918 | - | - |
| 0.1773 | 3980 | 0.3363 | - | - |
| 0.1778 | 3990 | 0.3694 | - | - |
| 0.1782 | 4000 | 0.3 | 0.7572 | 0.7542 |
| 0.1787 | 4010 | 0.3266 | - | - |
| 0.1791 | 4020 | 0.3059 | - | - |
| 0.1796 | 4030 | 0.3038 | - | - |
| 0.1800 | 4040 | 0.3415 | - | - |
| 0.1805 | 4050 | 0.3385 | - | - |
| 0.1809 | 4060 | 0.3145 | - | - |
| 0.1814 | 4070 | 0.2816 | - | - |
| 0.1818 | 4080 | 0.3272 | - | - |
| 0.1822 | 4090 | 0.3335 | - | - |
| 0.1827 | 4100 | 0.3412 | - | - |
| 0.1831 | 4110 | 0.3367 | - | - |
| 0.1836 | 4120 | 0.2754 | - | - |
| 0.1840 | 4130 | 0.298 | - | - |
| 0.1845 | 4140 | 0.3252 | - | - |
| 0.1849 | 4150 | 0.3613 | - | - |
| 0.1854 | 4160 | 0.3197 | - | - |
| 0.1858 | 4170 | 0.3578 | - | - |
| 0.1863 | 4180 | 0.3254 | - | - |
| 0.1867 | 4190 | 0.2993 | - | - |
| 0.1871 | 4200 | 0.3188 | - | - |
| 0.1876 | 4210 | 0.3217 | - | - |
| 0.1880 | 4220 | 0.2893 | - | - |
| 0.1885 | 4230 | 0.3223 | - | - |
| 0.1889 | 4240 | 0.3522 | - | - |
| 0.1894 | 4250 | 0.3489 | - | - |
| 0.1898 | 4260 | 0.3313 | - | - |
| 0.1903 | 4270 | 0.3612 | - | - |
| 0.1907 | 4280 | 0.3323 | - | - |
| 0.1912 | 4290 | 0.2971 | - | - |
| 0.1916 | 4300 | 0.3009 | - | - |
| 0.1920 | 4310 | 0.3336 | - | - |
| 0.1925 | 4320 | 0.3655 | - | - |
| 0.1929 | 4330 | 0.3414 | - | - |
| 0.1934 | 4340 | 0.2903 | - | - |
| 0.1938 | 4350 | 0.3732 | - | - |
| 0.1943 | 4360 | 0.3526 | - | - |
| 0.1947 | 4370 | 0.3424 | - | - |
| 0.1952 | 4380 | 0.3371 | - | - |
| 0.1956 | 4390 | 0.3407 | - | - |
| 0.1961 | 4400 | 0.3626 | - | - |
| 0.1965 | 4410 | 0.3104 | - | - |
| 0.1969 | 4420 | 0.3432 | - | - |
| 0.1974 | 4430 | 0.2897 | - | - |
| 0.1978 | 4440 | 0.2952 | - | - |
| 0.1983 | 4450 | 0.3032 | - | - |
| 0.1987 | 4460 | 0.3179 | - | - |
| 0.1992 | 4470 | 0.3364 | - | - |
| 0.1996 | 4480 | 0.2757 | - | - |
| 0.2001 | 4490 | 0.3775 | - | - |
| 0.2005 | 4500 | 0.2782 | - | - |
| 0.2010 | 4510 | 0.2787 | - | - |
| 0.2014 | 4520 | 0.3433 | - | - |
| 0.2018 | 4530 | 0.3348 | - | - |
| 0.2023 | 4540 | 0.295 | - | - |
| 0.2027 | 4550 | 0.3076 | - | - |
| 0.2032 | 4560 | 0.3489 | - | - |
| 0.2036 | 4570 | 0.3741 | - | - |
| 0.2041 | 4580 | 0.3121 | - | - |
| 0.2045 | 4590 | 0.2682 | - | - |
| 0.2050 | 4600 | 0.3106 | - | - |
| 0.2054 | 4610 | 0.312 | - | - |
| 0.2059 | 4620 | 0.3537 | - | - |
| 0.2063 | 4630 | 0.2801 | - | - |
| 0.2068 | 4640 | 0.3378 | - | - |
| 0.2072 | 4650 | 0.3417 | - | - |
| 0.2076 | 4660 | 0.4114 | - | - |
| 0.2081 | 4670 | 0.3325 | - | - |
| 0.2085 | 4680 | 0.3085 | - | - |
| 0.2090 | 4690 | 0.2875 | - | - |
| 0.2094 | 4700 | 0.3864 | - | - |
| 0.2099 | 4710 | 0.3235 | - | - |
| 0.2103 | 4720 | 0.3187 | - | - |
| 0.2108 | 4730 | 0.2956 | - | - |
| 0.2112 | 4740 | 0.3405 | - | - |
| 0.2117 | 4750 | 0.313 | - | - |
| 0.2121 | 4760 | 0.2865 | - | - |
| 0.2125 | 4770 | 0.3555 | - | - |
| 0.2130 | 4780 | 0.3089 | - | - |
| 0.2134 | 4790 | 0.3021 | - | - |
| 0.2139 | 4800 | 0.353 | - | - |
| 0.2143 | 4810 | 0.3356 | - | - |
| 0.2148 | 4820 | 0.338 | - | - |
| 0.2152 | 4830 | 0.3362 | - | - |
| 0.2157 | 4840 | 0.3152 | - | - |
| 0.2161 | 4850 | 0.3321 | - | - |
| 0.2166 | 4860 | 0.3087 | - | - |
| 0.2170 | 4870 | 0.3503 | - | - |
| 0.2174 | 4880 | 0.3841 | - | - |
| 0.2179 | 4890 | 0.333 | - | - |
| 0.2183 | 4900 | 0.3705 | - | - |
| 0.2188 | 4910 | 0.3121 | - | - |
| 0.2192 | 4920 | 0.3151 | - | - |
| 0.2197 | 4930 | 0.3138 | - | - |
| 0.2201 | 4940 | 0.3525 | - | - |
| 0.2206 | 4950 | 0.3233 | - | - |
| 0.2210 | 4960 | 0.2762 | - | - |
| 0.2215 | 4970 | 0.3679 | - | - |
| 0.2219 | 4980 | 0.3351 | - | - |
| 0.2223 | 4990 | 0.3733 | - | - |
| 0.2228 | 5000 | 0.366 | 0.7601 | 0.7577 |
| 0.2232 | 5010 | 0.2968 | - | - |
| 0.2237 | 5020 | 0.3618 | - | - |
| 0.2241 | 5030 | 0.3758 | - | - |
| 0.2246 | 5040 | 0.2664 | - | - |
| 0.2250 | 5050 | 0.3232 | - | - |
| 0.2255 | 5060 | 0.3452 | - | - |
| 0.2259 | 5070 | 0.4011 | - | - |
| 0.2264 | 5080 | 0.3521 | - | - |
| 0.2268 | 5090 | 0.3029 | - | - |
| 0.2272 | 5100 | 0.3058 | - | - |
| 0.2277 | 5110 | 0.3198 | - | - |
| 0.2281 | 5120 | 0.2958 | - | - |
| 0.2286 | 5130 | 0.3046 | - | - |
| 0.2290 | 5140 | 0.3284 | - | - |
| 0.2295 | 5150 | 0.333 | - | - |
| 0.2299 | 5160 | 0.3385 | - | - |
| 0.2304 | 5170 | 0.3359 | - | - |
| 0.2308 | 5180 | 0.3572 | - | - |
| 0.2313 | 5190 | 0.2992 | - | - |
| 0.2317 | 5200 | 0.318 | - | - |
| 0.2321 | 5210 | 0.3002 | - | - |
| 0.2326 | 5220 | 0.3194 | - | - |
| 0.2330 | 5230 | 0.3398 | - | - |
| 0.2335 | 5240 | 0.2675 | - | - |
| 0.2339 | 5250 | 0.312 | - | - |
| 0.2344 | 5260 | 0.3199 | - | - |
| 0.2348 | 5270 | 0.3446 | - | - |
| 0.2353 | 5280 | 0.3082 | - | - |
| 0.2357 | 5290 | 0.3522 | - | - |
| 0.2362 | 5300 | 0.3347 | - | - |
| 0.2366 | 5310 | 0.3571 | - | - |
| 0.2371 | 5320 | 0.3275 | - | - |
| 0.2375 | 5330 | 0.3524 | - | - |
| 0.2379 | 5340 | 0.3151 | - | - |
| 0.2384 | 5350 | 0.3338 | - | - |
| 0.2388 | 5360 | 0.3794 | - | - |
| 0.2393 | 5370 | 0.3591 | - | - |
| 0.2397 | 5380 | 0.3442 | - | - |
| 0.2402 | 5390 | 0.2927 | - | - |
| 0.2406 | 5400 | 0.3316 | - | - |
| 0.2411 | 5410 | 0.3152 | - | - |
| 0.2415 | 5420 | 0.3876 | - | - |
| 0.2420 | 5430 | 0.324 | - | - |
| 0.2424 | 5440 | 0.3296 | - | - |
| 0.2428 | 5450 | 0.3499 | - | - |
| 0.2433 | 5460 | 0.3552 | - | - |
| 0.2437 | 5470 | 0.3394 | - | - |
| 0.2442 | 5480 | 0.3083 | - | - |
| 0.2446 | 5490 | 0.3198 | - | - |
| 0.2451 | 5500 | 0.2887 | - | - |
| 0.2455 | 5510 | 0.2898 | - | - |
| 0.2460 | 5520 | 0.3092 | - | - |
| 0.2464 | 5530 | 0.3025 | - | - |
| 0.2469 | 5540 | 0.3253 | - | - |
| 0.2473 | 5550 | 0.3686 | - | - |
| 0.2477 | 5560 | 0.3205 | - | - |
| 0.2482 | 5570 | 0.3507 | - | - |
| 0.2486 | 5580 | 0.2809 | - | - |
| 0.2491 | 5590 | 0.3339 | - | - |
| 0.2495 | 5600 | 0.3261 | - | - |
| 0.2500 | 5610 | 0.2804 | - | - |
| 0.2504 | 5620 | 0.2856 | - | - |
| 0.2509 | 5630 | 0.3211 | - | - |
| 0.2513 | 5640 | 0.3126 | - | - |
| 0.2518 | 5650 | 0.3374 | - | - |
| 0.2522 | 5660 | 0.2957 | - | - |
| 0.2526 | 5670 | 0.3414 | - | - |
| 0.2531 | 5680 | 0.3219 | - | - |
| 0.2535 | 5690 | 0.3554 | - | - |
| 0.2540 | 5700 | 0.2738 | - | - |
| 0.2544 | 5710 | 0.361 | - | - |
| 0.2549 | 5720 | 0.336 | - | - |
| 0.2553 | 5730 | 0.3254 | - | - |
| 0.2558 | 5740 | 0.3453 | - | - |
| 0.2562 | 5750 | 0.2984 | - | - |
| 0.2567 | 5760 | 0.3224 | - | - |
| 0.2571 | 5770 | 0.2553 | - | - |
| 0.2575 | 5780 | 0.301 | - | - |
| 0.2580 | 5790 | 0.3767 | - | - |
| 0.2584 | 5800 | 0.3092 | - | - |
| 0.2589 | 5810 | 0.2676 | - | - |
| 0.2593 | 5820 | 0.3178 | - | - |
| 0.2598 | 5830 | 0.3117 | - | - |
| 0.2602 | 5840 | 0.3446 | - | - |
| 0.2607 | 5850 | 0.3347 | - | - |
| 0.2611 | 5860 | 0.3841 | - | - |
| 0.2616 | 5870 | 0.2847 | - | - |
| 0.2620 | 5880 | 0.3587 | - | - |
| 0.2624 | 5890 | 0.2812 | - | - |
| 0.2629 | 5900 | 0.3577 | - | - |
| 0.2633 | 5910 | 0.3011 | - | - |
| 0.2638 | 5920 | 0.3102 | - | - |
| 0.2642 | 5930 | 0.3297 | - | - |
| 0.2647 | 5940 | 0.2603 | - | - |
| 0.2651 | 5950 | 0.3575 | - | - |
| 0.2656 | 5960 | 0.3617 | - | - |
| 0.2660 | 5970 | 0.3587 | - | - |
| 0.2665 | 5980 | 0.3198 | - | - |
| 0.2669 | 5990 | 0.3536 | - | - |
| 0.2673 | 6000 | 0.3047 | 0.7725 | 0.7699 |
| 0.2678 | 6010 | 0.3211 | - | - |
| 0.2682 | 6020 | 0.392 | - | - |
| 0.2687 | 6030 | 0.3359 | - | - |
| 0.2691 | 6040 | 0.2903 | - | - |
| 0.2696 | 6050 | 0.286 | - | - |
| 0.2700 | 6060 | 0.3426 | - | - |
| 0.2705 | 6070 | 0.3406 | - | - |
| 0.2709 | 6080 | 0.2903 | - | - |
| 0.2714 | 6090 | 0.3175 | - | - |
| 0.2718 | 6100 | 0.2794 | - | - |
| 0.2723 | 6110 | 0.3232 | - | - |
| 0.2727 | 6120 | 0.3054 | - | - |
| 0.2731 | 6130 | 0.361 | - | - |
| 0.2736 | 6140 | 0.3524 | - | - |
| 0.2740 | 6150 | 0.3371 | - | - |
| 0.2745 | 6160 | 0.313 | - | - |
| 0.2749 | 6170 | 0.2713 | - | - |
| 0.2754 | 6180 | 0.3141 | - | - |
| 0.2758 | 6190 | 0.3197 | - | - |
| 0.2763 | 6200 | 0.2792 | - | - |
| 0.2767 | 6210 | 0.3169 | - | - |
| 0.2772 | 6220 | 0.307 | - | - |
| 0.2776 | 6230 | 0.2737 | - | - |
| 0.2780 | 6240 | 0.3348 | - | - |
| 0.2785 | 6250 | 0.2885 | - | - |
| 0.2789 | 6260 | 0.3416 | - | - |
| 0.2794 | 6270 | 0.3422 | - | - |
| 0.2798 | 6280 | 0.2758 | - | - |
| 0.2803 | 6290 | 0.3736 | - | - |
| 0.2807 | 6300 | 0.3036 | - | - |
| 0.2812 | 6310 | 0.3704 | - | - |
| 0.2816 | 6320 | 0.3312 | - | - |
| 0.2821 | 6330 | 0.3431 | - | - |
| 0.2825 | 6340 | 0.3502 | - | - |
| 0.2829 | 6350 | 0.2821 | - | - |
| 0.2834 | 6360 | 0.3097 | - | - |
| 0.2838 | 6370 | 0.3444 | - | - |
| 0.2843 | 6380 | 0.3349 | - | - |
| 0.2847 | 6390 | 0.2999 | - | - |
| 0.2852 | 6400 | 0.3149 | - | - |
| 0.2856 | 6410 | 0.3462 | - | - |
| 0.2861 | 6420 | 0.3337 | - | - |
| 0.2865 | 6430 | 0.3329 | - | - |
| 0.2870 | 6440 | 0.3294 | - | - |
| 0.2874 | 6450 | 0.2917 | - | - |
| 0.2878 | 6460 | 0.3007 | - | - |
| 0.2883 | 6470 | 0.2809 | - | - |
| 0.2887 | 6480 | 0.3745 | - | - |
| 0.2892 | 6490 | 0.3625 | - | - |
| 0.2896 | 6500 | 0.3123 | - | - |
| 0.2901 | 6510 | 0.3209 | - | - |
| 0.2905 | 6520 | 0.347 | - | - |
| 0.2910 | 6530 | 0.3084 | - | - |
| 0.2914 | 6540 | 0.2829 | - | - |
| 0.2919 | 6550 | 0.3569 | - | - |
| 0.2923 | 6560 | 0.2686 | - | - |
| 0.2927 | 6570 | 0.2929 | - | - |
| 0.2932 | 6580 | 0.3237 | - | - |
| 0.2936 | 6590 | 0.3451 | - | - |
| 0.2941 | 6600 | 0.3199 | - | - |
| 0.2945 | 6610 | 0.2848 | - | - |
| 0.2950 | 6620 | 0.2842 | - | - |
| 0.2954 | 6630 | 0.3168 | - | - |
| 0.2959 | 6640 | 0.3094 | - | - |
| 0.2963 | 6650 | 0.3239 | - | - |
| 0.2968 | 6660 | 0.357 | - | - |
| 0.2972 | 6670 | 0.3279 | - | - |
| 0.2976 | 6680 | 0.4015 | - | - |
| 0.2981 | 6690 | 0.2901 | - | - |
| 0.2985 | 6700 | 0.3387 | - | - |
| 0.2990 | 6710 | 0.3282 | - | - |
| 0.2994 | 6720 | 0.2909 | - | - |
| 0.2999 | 6730 | 0.3556 | - | - |
| 0.3003 | 6740 | 0.3008 | - | - |
| 0.3008 | 6750 | 0.3205 | - | - |
| 0.3012 | 6760 | 0.3132 | - | - |
| 0.3017 | 6770 | 0.3181 | - | - |
| 0.3021 | 6780 | 0.3752 | - | - |
| 0.3026 | 6790 | 0.317 | - | - |
| 0.3030 | 6800 | 0.3584 | - | - |
| 0.3034 | 6810 | 0.3475 | - | - |
| 0.3039 | 6820 | 0.2827 | - | - |
| 0.3043 | 6830 | 0.2925 | - | - |
| 0.3048 | 6840 | 0.2941 | - | - |
| 0.3052 | 6850 | 0.3154 | - | - |
| 0.3057 | 6860 | 0.3301 | - | - |
| 0.3061 | 6870 | 0.3492 | - | - |
| 0.3066 | 6880 | 0.3147 | - | - |
| 0.3070 | 6890 | 0.348 | - | - |
| 0.3075 | 6900 | 0.3577 | - | - |
| 0.3079 | 6910 | 0.2893 | - | - |
| 0.3083 | 6920 | 0.3298 | - | - |
| 0.3088 | 6930 | 0.3071 | - | - |
| 0.3092 | 6940 | 0.322 | - | - |
| 0.3097 | 6950 | 0.3055 | - | - |
| 0.3101 | 6960 | 0.3333 | - | - |
| 0.3106 | 6970 | 0.3329 | - | - |
| 0.3110 | 6980 | 0.3298 | - | - |
| 0.3115 | 6990 | 0.3061 | - | - |
| 0.3119 | 7000 | 0.3005 | 0.7686 | 0.7672 |
| 0.3124 | 7010 | 0.3463 | - | - |
| 0.3128 | 7020 | 0.3467 | - | - |
| 0.3132 | 7030 | 0.3104 | - | - |
| 0.3137 | 7040 | 0.3268 | - | - |
| 0.3141 | 7050 | 0.3222 | - | - |
| 0.3146 | 7060 | 0.3126 | - | - |
| 0.3150 | 7070 | 0.3121 | - | - |
| 0.3155 | 7080 | 0.2935 | - | - |
| 0.3159 | 7090 | 0.2897 | - | - |
| 0.3164 | 7100 | 0.3066 | - | - |
| 0.3168 | 7110 | 0.3363 | - | - |
| 0.3173 | 7120 | 0.3293 | - | - |
| 0.3177 | 7130 | 0.3161 | - | - |
| 0.3181 | 7140 | 0.3582 | - | - |
| 0.3186 | 7150 | 0.3345 | - | - |
| 0.3190 | 7160 | 0.3307 | - | - |
| 0.3195 | 7170 | 0.3269 | - | - |
| 0.3199 | 7180 | 0.3262 | - | - |
| 0.3204 | 7190 | 0.3115 | - | - |
| 0.3208 | 7200 | 0.3145 | - | - |
| 0.3213 | 7210 | 0.2816 | - | - |
| 0.3217 | 7220 | 0.3239 | - | - |
| 0.3222 | 7230 | 0.2825 | - | - |
| 0.3226 | 7240 | 0.3217 | - | - |
| 0.3230 | 7250 | 0.2913 | - | - |
| 0.3235 | 7260 | 0.3219 | - | - |
| 0.3239 | 7270 | 0.2968 | - | - |
| 0.3244 | 7280 | 0.2999 | - | - |
| 0.3248 | 7290 | 0.2924 | - | - |
| 0.3253 | 7300 | 0.3033 | - | - |
| 0.3257 | 7310 | 0.3521 | - | - |
| 0.3262 | 7320 | 0.3258 | - | - |
| 0.3266 | 7330 | 0.3724 | - | - |
| 0.3271 | 7340 | 0.3068 | - | - |
| 0.3275 | 7350 | 0.3095 | - | - |
| 0.3279 | 7360 | 0.2957 | - | - |
| 0.3284 | 7370 | 0.2741 | - | - |
| 0.3288 | 7380 | 0.3183 | - | - |
| 0.3293 | 7390 | 0.3409 | - | - |
| 0.3297 | 7400 | 0.3066 | - | - |
| 0.3302 | 7410 | 0.3139 | - | - |
| 0.3306 | 7420 | 0.3639 | - | - |
| 0.3311 | 7430 | 0.3333 | - | - |
| 0.3315 | 7440 | 0.276 | - | - |
| 0.3320 | 7450 | 0.3326 | - | - |
| 0.3324 | 7460 | 0.3239 | - | - |
| 0.3329 | 7470 | 0.3067 | - | - |
| 0.3333 | 7480 | 0.3213 | - | - |
| 0.3337 | 7490 | 0.3227 | - | - |
| 0.3342 | 7500 | 0.3027 | - | - |
| 0.3346 | 7510 | 0.3017 | - | - |
| 0.3351 | 7520 | 0.2797 | - | - |
| 0.3355 | 7530 | 0.3215 | - | - |
| 0.3360 | 7540 | 0.2713 | - | - |
| 0.3364 | 7550 | 0.3071 | - | - |
| 0.3369 | 7560 | 0.309 | - | - |
| 0.3373 | 7570 | 0.3145 | - | - |
| 0.3378 | 7580 | 0.2694 | - | - |
| 0.3382 | 7590 | 0.3036 | - | - |
| 0.3386 | 7600 | 0.2892 | - | - |
| 0.3391 | 7610 | 0.3227 | - | - |
| 0.3395 | 7620 | 0.3373 | - | - |
| 0.3400 | 7630 | 0.2584 | - | - |
| 0.3404 | 7640 | 0.232 | - | - |
| 0.3409 | 7650 | 0.311 | - | - |
| 0.3413 | 7660 | 0.3536 | - | - |
| 0.3418 | 7670 | 0.3279 | - | - |
| 0.3422 | 7680 | 0.3034 | - | - |
| 0.3427 | 7690 | 0.2916 | - | - |
| 0.3431 | 7700 | 0.2822 | - | - |
| 0.3435 | 7710 | 0.2871 | - | - |
| 0.3440 | 7720 | 0.3284 | - | - |
| 0.3444 | 7730 | 0.2909 | - | - |
| 0.3449 | 7740 | 0.3292 | - | - |
| 0.3453 | 7750 | 0.3393 | - | - |
| 0.3458 | 7760 | 0.2838 | - | - |
| 0.3462 | 7770 | 0.2686 | - | - |
| 0.3467 | 7780 | 0.318 | - | - |
| 0.3471 | 7790 | 0.3335 | - | - |
| 0.3476 | 7800 | 0.3017 | - | - |
| 0.3480 | 7810 | 0.2595 | - | - |
| 0.3484 | 7820 | 0.3008 | - | - |
| 0.3489 | 7830 | 0.2726 | - | - |
| 0.3493 | 7840 | 0.2938 | - | - |
| 0.3498 | 7850 | 0.2923 | - | - |
| 0.3502 | 7860 | 0.361 | - | - |
| 0.3507 | 7870 | 0.2689 | - | - |
| 0.3511 | 7880 | 0.3014 | - | - |
| 0.3516 | 7890 | 0.3169 | - | - |
| 0.3520 | 7900 | 0.3124 | - | - |
| 0.3525 | 7910 | 0.3367 | - | - |
| 0.3529 | 7920 | 0.276 | - | - |
| 0.3533 | 7930 | 0.3556 | - | - |
| 0.3538 | 7940 | 0.3036 | - | - |
| 0.3542 | 7950 | 0.2983 | - | - |
| 0.3547 | 7960 | 0.3393 | - | - |
| 0.3551 | 7970 | 0.3688 | - | - |
| 0.3556 | 7980 | 0.3391 | - | - |
| 0.3560 | 7990 | 0.3432 | - | - |
| 0.3565 | 8000 | 0.3061 | 0.7543 | 0.7526 |
| 0.3569 | 8010 | 0.293 | - | - |
| 0.3574 | 8020 | 0.2925 | - | - |
| 0.3578 | 8030 | 0.2852 | - | - |
| 0.3582 | 8040 | 0.396 | - | - |
| 0.3587 | 8050 | 0.2927 | - | - |
| 0.3591 | 8060 | 0.3028 | - | - |
| 0.3596 | 8070 | 0.3102 | - | - |
| 0.3600 | 8080 | 0.328 | - | - |
| 0.3605 | 8090 | 0.3194 | - | - |
| 0.3609 | 8100 | 0.2808 | - | - |
| 0.3614 | 8110 | 0.292 | - | - |
| 0.3618 | 8120 | 0.3232 | - | - |
| 0.3623 | 8130 | 0.3629 | - | - |
| 0.3627 | 8140 | 0.3222 | - | - |
| 0.3632 | 8150 | 0.3691 | - | - |
| 0.3636 | 8160 | 0.2965 | - | - |
| 0.3640 | 8170 | 0.293 | - | - |
| 0.3645 | 8180 | 0.3166 | - | - |
| 0.3649 | 8190 | 0.3021 | - | - |
| 0.3654 | 8200 | 0.2815 | - | - |
| 0.3658 | 8210 | 0.3089 | - | - |
| 0.3663 | 8220 | 0.2804 | - | - |
| 0.3667 | 8230 | 0.3011 | - | - |
| 0.3672 | 8240 | 0.27 | - | - |
| 0.3676 | 8250 | 0.361 | - | - |
| 0.3681 | 8260 | 0.3322 | - | - |
| 0.3685 | 8270 | 0.2741 | - | - |
| 0.3689 | 8280 | 0.3207 | - | - |
| 0.3694 | 8290 | 0.3437 | - | - |
| 0.3698 | 8300 | 0.3259 | - | - |
| 0.3703 | 8310 | 0.2473 | - | - |
| 0.3707 | 8320 | 0.2321 | - | - |
| 0.3712 | 8330 | 0.2699 | - | - |
| 0.3716 | 8340 | 0.2404 | - | - |
| 0.3721 | 8350 | 0.2586 | - | - |
| 0.3725 | 8360 | 0.295 | - | - |
| 0.3730 | 8370 | 0.3063 | - | - |
| 0.3734 | 8380 | 0.2551 | - | - |
| 0.3738 | 8390 | 0.2562 | - | - |
| 0.3743 | 8400 | 0.3062 | - | - |
| 0.3747 | 8410 | 0.3165 | - | - |
| 0.3752 | 8420 | 0.308 | - | - |
| 0.3756 | 8430 | 0.2976 | - | - |
| 0.3761 | 8440 | 0.284 | - | - |
| 0.3765 | 8450 | 0.3525 | - | - |
| 0.3770 | 8460 | 0.2639 | - | - |
| 0.3774 | 8470 | 0.3171 | - | - |
| 0.3779 | 8480 | 0.3367 | - | - |
| 0.3783 | 8490 | 0.2801 | - | - |
| 0.3787 | 8500 | 0.2957 | - | - |
| 0.3792 | 8510 | 0.3684 | - | - |
| 0.3796 | 8520 | 0.312 | - | - |
| 0.3801 | 8530 | 0.3703 | - | - |
| 0.3805 | 8540 | 0.2963 | - | - |
| 0.3810 | 8550 | 0.3032 | - | - |
| 0.3814 | 8560 | 0.3415 | - | - |
| 0.3819 | 8570 | 0.3011 | - | - |
| 0.3823 | 8580 | 0.33 | - | - |
| 0.3828 | 8590 | 0.2763 | - | - |
| 0.3832 | 8600 | 0.3295 | - | - |
| 0.3836 | 8610 | 0.3334 | - | - |
| 0.3841 | 8620 | 0.258 | - | - |
| 0.3845 | 8630 | 0.2626 | - | - |
| 0.3850 | 8640 | 0.2813 | - | - |
| 0.3854 | 8650 | 0.2845 | - | - |
| 0.3859 | 8660 | 0.2719 | - | - |
| 0.3863 | 8670 | 0.2898 | - | - |
| 0.3868 | 8680 | 0.3011 | - | - |
| 0.3872 | 8690 | 0.2914 | - | - |
| 0.3877 | 8700 | 0.3355 | - | - |
| 0.3881 | 8710 | 0.2678 | - | - |
| 0.3885 | 8720 | 0.2266 | - | - |
| 0.3890 | 8730 | 0.3016 | - | - |
| 0.3894 | 8740 | 0.3369 | - | - |
| 0.3899 | 8750 | 0.3558 | - | - |
| 0.3903 | 8760 | 0.2824 | - | - |
| 0.3908 | 8770 | 0.3201 | - | - |
| 0.3912 | 8780 | 0.2485 | - | - |
| 0.3917 | 8790 | 0.2603 | - | - |
| 0.3921 | 8800 | 0.3223 | - | - |
| 0.3926 | 8810 | 0.247 | - | - |
| 0.3930 | 8820 | 0.2766 | - | - |
| 0.3934 | 8830 | 0.3231 | - | - |
| 0.3939 | 8840 | 0.322 | - | - |
| 0.3943 | 8850 | 0.3039 | - | - |
| 0.3948 | 8860 | 0.2442 | - | - |
| 0.3952 | 8870 | 0.36 | - | - |
| 0.3957 | 8880 | 0.2551 | - | - |
| 0.3961 | 8890 | 0.2661 | - | - |
| 0.3966 | 8900 | 0.3001 | - | - |
| 0.3970 | 8910 | 0.2886 | - | - |
| 0.3975 | 8920 | 0.2856 | - | - |
| 0.3979 | 8930 | 0.2827 | - | - |
| 0.3984 | 8940 | 0.2652 | - | - |
| 0.3988 | 8950 | 0.3077 | - | - |
| 0.3992 | 8960 | 0.3094 | - | - |
| 0.3997 | 8970 | 0.3281 | - | - |
| 0.4001 | 8980 | 0.3399 | - | - |
| 0.4006 | 8990 | 0.3093 | - | - |
| 0.4010 | 9000 | 0.2586 | 0.7634 | 0.7607 |
| 0.4015 | 9010 | 0.2939 | - | - |
| 0.4019 | 9020 | 0.3022 | - | - |
| 0.4024 | 9030 | 0.2919 | - | - |
| 0.4028 | 9040 | 0.2524 | - | - |
| 0.4033 | 9050 | 0.2248 | - | - |
| 0.4037 | 9060 | 0.2759 | - | - |
| 0.4041 | 9070 | 0.2916 | - | - |
| 0.4046 | 9080 | 0.3006 | - | - |
| 0.4050 | 9090 | 0.2302 | - | - |
| 0.4055 | 9100 | 0.3001 | - | - |
| 0.4059 | 9110 | 0.3143 | - | - |
| 0.4064 | 9120 | 0.2544 | - | - |
| 0.4068 | 9130 | 0.3142 | - | - |
| 0.4073 | 9140 | 0.3364 | - | - |
| 0.4077 | 9150 | 0.2785 | - | - |
| 0.4082 | 9160 | 0.2948 | - | - |
| 0.4086 | 9170 | 0.2657 | - | - |
| 0.4090 | 9180 | 0.2722 | - | - |
| 0.4095 | 9190 | 0.3212 | - | - |
| 0.4099 | 9200 | 0.2952 | - | - |
| 0.4104 | 9210 | 0.2764 | - | - |
| 0.4108 | 9220 | 0.2744 | - | - |
| 0.4113 | 9230 | 0.2912 | - | - |
| 0.4117 | 9240 | 0.2676 | - | - |
| 0.4122 | 9250 | 0.2613 | - | - |
| 0.4126 | 9260 | 0.2905 | - | - |
| 0.4131 | 9270 | 0.3308 | - | - |
| 0.4135 | 9280 | 0.3311 | - | - |
| 0.4139 | 9290 | 0.2904 | - | - |
| 0.4144 | 9300 | 0.3367 | - | - |
| 0.4148 | 9310 | 0.2742 | - | - |
| 0.4153 | 9320 | 0.295 | - | - |
| 0.4157 | 9330 | 0.3034 | - | - |
| 0.4162 | 9340 | 0.3302 | - | - |
| 0.4166 | 9350 | 0.2883 | - | - |
| 0.4171 | 9360 | 0.2768 | - | - |
| 0.4175 | 9370 | 0.2953 | - | - |
| 0.4180 | 9380 | 0.3196 | - | - |
| 0.4184 | 9390 | 0.2731 | - | - |
| 0.4188 | 9400 | 0.3016 | - | - |
| 0.4193 | 9410 | 0.3325 | - | - |
| 0.4197 | 9420 | 0.2503 | - | - |
| 0.4202 | 9430 | 0.273 | - | - |
| 0.4206 | 9440 | 0.2784 | - | - |
| 0.4211 | 9450 | 0.2676 | - | - |
| 0.4215 | 9460 | 0.2891 | - | - |
| 0.4220 | 9470 | 0.2977 | - | - |
| 0.4224 | 9480 | 0.2673 | - | - |
| 0.4229 | 9490 | 0.2845 | - | - |
| 0.4233 | 9500 | 0.2825 | - | - |
| 0.4237 | 9510 | 0.2865 | - | - |
| 0.4242 | 9520 | 0.2451 | - | - |
| 0.4246 | 9530 | 0.2806 | - | - |
| 0.4251 | 9540 | 0.2629 | - | - |
| 0.4255 | 9550 | 0.3426 | - | - |
| 0.4260 | 9560 | 0.2453 | - | - |
| 0.4264 | 9570 | 0.3458 | - | - |
| 0.4269 | 9580 | 0.2392 | - | - |
| 0.4273 | 9590 | 0.2433 | - | - |
| 0.4278 | 9600 | 0.2481 | - | - |
| 0.4282 | 9610 | 0.3277 | - | - |
| 0.4287 | 9620 | 0.2609 | - | - |
| 0.4291 | 9630 | 0.2986 | - | - |
| 0.4295 | 9640 | 0.2712 | - | - |
| 0.4300 | 9650 | 0.3169 | - | - |
| 0.4304 | 9660 | 0.2638 | - | - |
| 0.4309 | 9670 | 0.2821 | - | - |
| 0.4313 | 9680 | 0.2969 | - | - |
| 0.4318 | 9690 | 0.2727 | - | - |
| 0.4322 | 9700 | 0.2858 | - | - |
| 0.4327 | 9710 | 0.2988 | - | - |
| 0.4331 | 9720 | 0.2628 | - | - |
| 0.4336 | 9730 | 0.3027 | - | - |
| 0.4340 | 9740 | 0.2502 | - | - |
| 0.4344 | 9750 | 0.3028 | - | - |
| 0.4349 | 9760 | 0.2381 | - | - |
| 0.4353 | 9770 | 0.2981 | - | - |
| 0.4358 | 9780 | 0.2208 | - | - |
| 0.4362 | 9790 | 0.2433 | - | - |
| 0.4367 | 9800 | 0.2672 | - | - |
| 0.4371 | 9810 | 0.3147 | - | - |
| 0.4376 | 9820 | 0.2655 | - | - |
| 0.4380 | 9830 | 0.273 | - | - |
| 0.4385 | 9840 | 0.3505 | - | - |
| 0.4389 | 9850 | 0.2822 | - | - |
| 0.4393 | 9860 | 0.2682 | - | - |
| 0.4398 | 9870 | 0.294 | - | - |
| 0.4402 | 9880 | 0.3002 | - | - |
| 0.4407 | 9890 | 0.2514 | - | - |
| 0.4411 | 9900 | 0.3193 | - | - |
| 0.4416 | 9910 | 0.2296 | - | - |
| 0.4420 | 9920 | 0.2209 | - | - |
| 0.4425 | 9930 | 0.2961 | - | - |
| 0.4429 | 9940 | 0.297 | - | - |
| 0.4434 | 9950 | 0.2734 | - | - |
| 0.4438 | 9960 | 0.2806 | - | - |
| 0.4442 | 9970 | 0.2634 | - | - |
| 0.4447 | 9980 | 0.3131 | - | - |
| 0.4451 | 9990 | 0.3007 | - | - |
| 0.4456 | 10000 | 0.3299 | 0.7687 | 0.7657 |
| 0.4460 | 10010 | 0.2224 | - | - |
| 0.4465 | 10020 | 0.2891 | - | - |
| 0.4469 | 10030 | 0.2997 | - | - |
| 0.4474 | 10040 | 0.3072 | - | - |
| 0.4478 | 10050 | 0.2657 | - | - |
| 0.4483 | 10060 | 0.2927 | - | - |
| 0.4487 | 10070 | 0.3071 | - | - |
| 0.4491 | 10080 | 0.2734 | - | - |
| 0.4496 | 10090 | 0.3016 | - | - |
| 0.4500 | 10100 | 0.2798 | - | - |
| 0.4505 | 10110 | 0.2845 | - | - |
| 0.4509 | 10120 | 0.2788 | - | - |
| 0.4514 | 10130 | 0.2914 | - | - |
| 0.4518 | 10140 | 0.2693 | - | - |
| 0.4523 | 10150 | 0.2866 | - | - |
| 0.4527 | 10160 | 0.3127 | - | - |
| 0.4532 | 10170 | 0.2743 | - | - |
| 0.4536 | 10180 | 0.3078 | - | - |
| 0.4540 | 10190 | 0.3003 | - | - |
| 0.4545 | 10200 | 0.2872 | - | - |
| 0.4549 | 10210 | 0.2461 | - | - |
| 0.4554 | 10220 | 0.2944 | - | - |
| 0.4558 | 10230 | 0.2765 | - | - |
| 0.4563 | 10240 | 0.2763 | - | - |
| 0.4567 | 10250 | 0.2905 | - | - |
| 0.4572 | 10260 | 0.2856 | - | - |
| 0.4576 | 10270 | 0.2722 | - | - |
| 0.4581 | 10280 | 0.2668 | - | - |
| 0.4585 | 10290 | 0.3014 | - | - |
| 0.4590 | 10300 | 0.3083 | - | - |
| 0.4594 | 10310 | 0.2957 | - | - |
| 0.4598 | 10320 | 0.3093 | - | - |
| 0.4603 | 10330 | 0.3009 | - | - |
| 0.4607 | 10340 | 0.3161 | - | - |
| 0.4612 | 10350 | 0.2737 | - | - |
| 0.4616 | 10360 | 0.2473 | - | - |
| 0.4621 | 10370 | 0.2999 | - | - |
| 0.4625 | 10380 | 0.2943 | - | - |
| 0.4630 | 10390 | 0.2784 | - | - |
| 0.4634 | 10400 | 0.2541 | - | - |
| 0.4639 | 10410 | 0.2731 | - | - |
| 0.4643 | 10420 | 0.2608 | - | - |
| 0.4647 | 10430 | 0.3024 | - | - |
| 0.4652 | 10440 | 0.2563 | - | - |
| 0.4656 | 10450 | 0.2725 | - | - |
| 0.4661 | 10460 | 0.2643 | - | - |
| 0.4665 | 10470 | 0.2627 | - | - |
| 0.4670 | 10480 | 0.2655 | - | - |
| 0.4674 | 10490 | 0.2556 | - | - |
| 0.4679 | 10500 | 0.299 | - | - |
| 0.4683 | 10510 | 0.3286 | - | - |
| 0.4688 | 10520 | 0.3075 | - | - |
| 0.4692 | 10530 | 0.2702 | - | - |
| 0.4696 | 10540 | 0.2688 | - | - |
| 0.4701 | 10550 | 0.29 | - | - |
| 0.4705 | 10560 | 0.2918 | - | - |
| 0.4710 | 10570 | 0.2507 | - | - |
| 0.4714 | 10580 | 0.2849 | - | - |
| 0.4719 | 10590 | 0.2938 | - | - |
| 0.4723 | 10600 | 0.2275 | - | - |
| 0.4728 | 10610 | 0.2662 | - | - |
| 0.4732 | 10620 | 0.2864 | - | - |
| 0.4737 | 10630 | 0.2865 | - | - |
| 0.4741 | 10640 | 0.3094 | - | - |
| 0.4745 | 10650 | 0.2479 | - | - |
| 0.4750 | 10660 | 0.2483 | - | - |
| 0.4754 | 10670 | 0.3166 | - | - |
| 0.4759 | 10680 | 0.2727 | - | - |
| 0.4763 | 10690 | 0.3077 | - | - |
| 0.4768 | 10700 | 0.3076 | - | - |
| 0.4772 | 10710 | 0.2835 | - | - |
| 0.4777 | 10720 | 0.2893 | - | - |
| 0.4781 | 10730 | 0.2889 | - | - |
| 0.4786 | 10740 | 0.279 | - | - |
| 0.4790 | 10750 | 0.2487 | - | - |
| 0.4794 | 10760 | 0.2936 | - | - |
| 0.4799 | 10770 | 0.2471 | - | - |
| 0.4803 | 10780 | 0.2807 | - | - |
| 0.4808 | 10790 | 0.2868 | - | - |
| 0.4812 | 10800 | 0.229 | - | - |
| 0.4817 | 10810 | 0.2683 | - | - |
| 0.4821 | 10820 | 0.2686 | - | - |
| 0.4826 | 10830 | 1.8939 | - | - |
| 0.4830 | 10840 | 0.8922 | - | - |
| 0.4835 | 10850 | 0.9472 | - | - |
| 0.4839 | 10860 | 0.7066 | - | - |
| 0.4843 | 10870 | 0.6178 | - | - |
| 0.4848 | 10880 | 0.6898 | - | - |
| 0.4852 | 10890 | 0.7844 | - | - |
| 0.4857 | 10900 | 0.9946 | - | - |
| 0.4861 | 10910 | 1.3618 | - | - |
| 0.4866 | 10920 | 1.2785 | - | - |
| 0.4870 | 10930 | 0.9415 | - | - |
| 0.4875 | 10940 | 0.753 | - | - |
| 0.4879 | 10950 | 0.6851 | - | - |
| 0.4884 | 10960 | 0.7812 | - | - |
| 0.4888 | 10970 | 0.9856 | - | - |
| 0.4893 | 10980 | 0.7245 | - | - |
| 0.4897 | 10990 | 1.0757 | - | - |
| 0.4901 | 11000 | 0.996 | 0.7854 | 0.7828 |
| 0.4906 | 11010 | 0.8984 | - | - |
| 0.4910 | 11020 | 0.9795 | - | - |
| 0.4915 | 11030 | 0.7918 | - | - |
| 0.4919 | 11040 | 0.7253 | - | - |
| 0.4924 | 11050 | 0.9031 | - | - |
| 0.4928 | 11060 | 0.9121 | - | - |
| 0.4933 | 11070 | 0.68 | - | - |
| 0.4937 | 11080 | 0.5949 | - | - |
| 0.4942 | 11090 | 0.8265 | - | - |
| 0.4946 | 11100 | 0.9904 | - | - |
| 0.4950 | 11110 | 1.0019 | - | - |
| 0.4955 | 11120 | 1.1003 | - | - |
| 0.4959 | 11130 | 0.7394 | - | - |
| 0.4964 | 11140 | 0.873 | - | - |
| 0.4968 | 11150 | 0.8108 | - | - |
| 0.4973 | 11160 | 0.8597 | - | - |
| 0.4977 | 11170 | 0.8456 | - | - |
| 0.4982 | 11180 | 0.8565 | - | - |
| 0.4986 | 11190 | 0.927 | - | - |
| 0.4991 | 11200 | 0.7665 | - | - |
| 0.4995 | 11210 | 0.5243 | - | - |
| 0.4999 | 11220 | 0.2878 | - | - |
| 0.5004 | 11230 | 0.4855 | - | - |
| 0.5008 | 11240 | 0.7549 | - | - |
| 0.5013 | 11250 | 0.6238 | - | - |
| 0.5017 | 11260 | 0.5168 | - | - |
| 0.5022 | 11270 | 0.4326 | - | - |
| 0.5026 | 11280 | 0.4716 | - | - |
| 0.5031 | 11290 | 0.3107 | - | - |
| 0.5035 | 11300 | 0.4574 | - | - |
| 0.5040 | 11310 | 0.4029 | - | - |
| 0.5044 | 11320 | 0.3456 | - | - |
| 0.5048 | 11330 | 0.4598 | - | - |
| 0.5053 | 11340 | 0.466 | - | - |
| 0.5057 | 11350 | 0.4424 | - | - |
| 0.5062 | 11360 | 0.4651 | - | - |
| 0.5066 | 11370 | 0.467 | - | - |
| 0.5071 | 11380 | 0.4323 | - | - |
| 0.5075 | 11390 | 0.4993 | - | - |
| 0.5080 | 11400 | 0.5946 | - | - |
| 0.5084 | 11410 | 0.7139 | - | - |
| 0.5089 | 11420 | 0.7657 | - | - |
| 0.5093 | 11430 | 0.7255 | - | - |
| 0.5097 | 11440 | 0.8461 | - | - |
| 0.5102 | 11450 | 0.6687 | - | - |
| 0.5106 | 11460 | 0.5091 | - | - |
| 0.5111 | 11470 | 0.3306 | - | - |
| 0.5115 | 11480 | 0.4152 | - | - |
| 0.5120 | 11490 | 0.3588 | - | - |
| 0.5124 | 11500 | 0.2542 | - | - |
| 0.5129 | 11510 | 0.5537 | - | - |
| 0.5133 | 11520 | 0.3634 | - | - |
| 0.5138 | 11530 | 0.4235 | - | - |
| 0.5142 | 11540 | 0.4202 | - | - |
| 0.5146 | 11550 | 0.5469 | - | - |
| 0.5151 | 11560 | 0.324 | - | - |
| 0.5155 | 11570 | 0.2884 | - | - |
| 0.5160 | 11580 | 0.4072 | - | - |
| 0.5164 | 11590 | 0.4224 | - | - |
| 0.5169 | 11600 | 0.3676 | - | - |
| 0.5173 | 11610 | 0.5243 | - | - |
| 0.5178 | 11620 | 0.5065 | - | - |
| 0.5182 | 11630 | 0.4646 | - | - |
| 0.5187 | 11640 | 0.4851 | - | - |
| 0.5191 | 11650 | 0.4187 | - | - |
| 0.5195 | 11660 | 0.4419 | - | - |
| 0.5200 | 11670 | 0.5056 | - | - |
| 0.5204 | 11680 | 0.404 | - | - |
| 0.5209 | 11690 | 0.2907 | - | - |
| 0.5213 | 11700 | 0.4586 | - | - |
| 0.5218 | 11710 | 0.3216 | - | - |
| 0.5222 | 11720 | 0.301 | - | - |
| 0.5227 | 11730 | 0.5921 | - | - |
| 0.5231 | 11740 | 0.7519 | - | - |
| 0.5236 | 11750 | 0.6452 | - | - |
| 0.5240 | 11760 | 0.5754 | - | - |
| 0.5245 | 11770 | 0.6165 | - | - |
| 0.5249 | 11780 | 0.5047 | - | - |
| 0.5253 | 11790 | 0.4663 | - | - |
| 0.5258 | 11800 | 0.5821 | - | - |
| 0.5262 | 11810 | 0.6243 | - | - |
| 0.5267 | 11820 | 0.6297 | - | - |
| 0.5271 | 11830 | 0.6245 | - | - |
| 0.5276 | 11840 | 0.481 | - | - |
| 0.5280 | 11850 | 0.4765 | - | - |
| 0.5285 | 11860 | 0.6135 | - | - |
| 0.5289 | 11870 | 0.5482 | - | - |
| 0.5294 | 11880 | 0.5489 | - | - |
| 0.5298 | 11890 | 0.3876 | - | - |
| 0.5302 | 11900 | 0.4581 | - | - |
| 0.5307 | 11910 | 0.4316 | - | - |
| 0.5311 | 11920 | 0.598 | - | - |
| 0.5316 | 11930 | 0.5204 | - | - |
| 0.5320 | 11940 | 0.3851 | - | - |
| 0.5325 | 11950 | 0.318 | - | - |
| 0.5329 | 11960 | 0.4887 | - | - |
| 0.5334 | 11970 | 0.6857 | - | - |
| 0.5338 | 11980 | 0.4579 | - | - |
| 0.5343 | 11990 | 0.2892 | - | - |
| 0.5347 | 12000 | 0.3245 | 0.7634 | 0.7602 |
| 0.5351 | 12010 | 0.3557 | - | - |
| 0.5356 | 12020 | 0.2726 | - | - |
| 0.5360 | 12030 | 0.4119 | - | - |
| 0.5365 | 12040 | 0.5011 | - | - |
| 0.5369 | 12050 | 0.3544 | - | - |
| 0.5374 | 12060 | 0.5049 | - | - |
| 0.5378 | 12070 | 0.3972 | - | - |
| 0.5383 | 12080 | 0.4198 | - | - |
| 0.5387 | 12090 | 0.398 | - | - |
| 0.5392 | 12100 | 0.4202 | - | - |
| 0.5396 | 12110 | 0.5535 | - | - |
| 0.5400 | 12120 | 0.4567 | - | - |
| 0.5405 | 12130 | 0.3574 | - | - |
| 0.5409 | 12140 | 0.5295 | - | - |
| 0.5414 | 12150 | 0.5034 | - | - |
| 0.5418 | 12160 | 0.7229 | - | - |
| 0.5423 | 12170 | 0.6904 | - | - |
| 0.5427 | 12180 | 0.5902 | - | - |
| 0.5432 | 12190 | 0.7509 | - | - |
| 0.5436 | 12200 | 0.7589 | - | - |
| 0.5441 | 12210 | 1.1649 | - | - |
| 0.5445 | 12220 | 0.9536 | - | - |
| 0.5449 | 12230 | 0.7541 | - | - |
| 0.5454 | 12240 | 0.4796 | - | - |
| 0.5458 | 12250 | 0.3174 | - | - |
| 0.5463 | 12260 | 0.5638 | - | - |
| 0.5467 | 12270 | 0.4724 | - | - |
| 0.5472 | 12280 | 0.5634 | - | - |
| 0.5476 | 12290 | 0.5743 | - | - |
| 0.5481 | 12300 | 0.4831 | - | - |
| 0.5485 | 12310 | 0.4186 | - | - |
| 0.5490 | 12320 | 0.6252 | - | - |
| 0.5494 | 12330 | 0.3462 | - | - |
| 0.5498 | 12340 | 0.5619 | - | - |
| 0.5503 | 12350 | 0.523 | - | - |
| 0.5507 | 12360 | 0.6483 | - | - |
| 0.5512 | 12370 | 0.4535 | - | - |
| 0.5516 | 12380 | 0.5385 | - | - |
| 0.5521 | 12390 | 0.5842 | - | - |
| 0.5525 | 12400 | 0.5908 | - | - |
| 0.5530 | 12410 | 0.6554 | - | - |
| 0.5534 | 12420 | 0.4226 | - | - |
| 0.5539 | 12430 | 0.5474 | - | - |
| 0.5543 | 12440 | 0.5548 | - | - |
| 0.5548 | 12450 | 0.4978 | - | - |
| 0.5552 | 12460 | 0.577 | - | - |
| 0.5556 | 12470 | 0.4582 | - | - |
| 0.5561 | 12480 | 0.4442 | - | - |
| 0.5565 | 12490 | 0.5035 | - | - |
| 0.5570 | 12500 | 0.5048 | - | - |
| 0.5574 | 12510 | 0.4682 | - | - |
| 0.5579 | 12520 | 0.5447 | - | - |
| 0.5583 | 12530 | 0.3742 | - | - |
| 0.5588 | 12540 | 0.5258 | - | - |
| 0.5592 | 12550 | 0.4223 | - | - |
| 0.5597 | 12560 | 0.4796 | - | - |
| 0.5601 | 12570 | 0.5129 | - | - |
| 0.5605 | 12580 | 0.2938 | - | - |
| 0.5610 | 12590 | 0.3879 | - | - |
| 0.5614 | 12600 | 0.497 | - | - |
| 0.5619 | 12610 | 0.4239 | - | - |
| 0.5623 | 12620 | 0.356 | - | - |
| 0.5628 | 12630 | 0.5157 | - | - |
| 0.5632 | 12640 | 0.5184 | - | - |
| 0.5637 | 12650 | 0.5824 | - | - |
| 0.5641 | 12660 | 0.5635 | - | - |
| 0.5646 | 12670 | 0.3486 | - | - |
| 0.5650 | 12680 | 0.3022 | - | - |
| 0.5654 | 12690 | 0.4913 | - | - |
| 0.5659 | 12700 | 0.447 | - | - |
| 0.5663 | 12710 | 0.3714 | - | - |
| 0.5668 | 12720 | 0.5712 | - | - |
| 0.5672 | 12730 | 0.3758 | - | - |
| 0.5677 | 12740 | 0.5869 | - | - |
| 0.5681 | 12750 | 0.5138 | - | - |
| 0.5686 | 12760 | 0.5118 | - | - |
| 0.5690 | 12770 | 0.5657 | - | - |
| 0.5695 | 12780 | 0.4573 | - | - |
| 0.5699 | 12790 | 0.4634 | - | - |
| 0.5703 | 12800 | 0.5607 | - | - |
| 0.5708 | 12810 | 0.5165 | - | - |
| 0.5712 | 12820 | 0.7618 | - | - |
| 0.5717 | 12830 | 0.6403 | - | - |
| 0.5721 | 12840 | 0.7764 | - | - |
| 0.5726 | 12850 | 0.5983 | - | - |
| 0.5730 | 12860 | 0.4542 | - | - |
| 0.5735 | 12870 | 0.5369 | - | - |
| 0.5739 | 12880 | 0.609 | - | - |
| 0.5744 | 12890 | 0.7868 | - | - |
| 0.5748 | 12900 | 0.5426 | - | - |
| 0.5752 | 12910 | 0.6825 | - | - |
| 0.5757 | 12920 | 0.9235 | - | - |
| 0.5761 | 12930 | 0.794 | - | - |
| 0.5766 | 12940 | 0.6463 | - | - |
| 0.5770 | 12950 | 0.5675 | - | - |
| 0.5775 | 12960 | 0.5504 | - | - |
| 0.5779 | 12970 | 0.5388 | - | - |
| 0.5784 | 12980 | 0.5311 | - | - |
| 0.5788 | 12990 | 0.4888 | - | - |
| 0.5793 | 13000 | 0.5829 | 0.7793 | 0.7755 |
| 0.5797 | 13010 | 0.4561 | - | - |
| 0.5801 | 13020 | 0.6509 | - | - |
| 0.5806 | 13030 | 0.6399 | - | - |
| 0.5810 | 13040 | 0.5947 | - | - |
| 0.5815 | 13050 | 0.5671 | - | - |
| 0.5819 | 13060 | 0.4247 | - | - |
| 0.5824 | 13070 | 0.4867 | - | - |
| 0.5828 | 13080 | 0.4994 | - | - |
| 0.5833 | 13090 | 0.6435 | - | - |
| 0.5837 | 13100 | 0.5342 | - | - |
| 0.5842 | 13110 | 0.4914 | - | - |
| 0.5846 | 13120 | 0.3861 | - | - |
| 0.5851 | 13130 | 0.5282 | - | - |
| 0.5855 | 13140 | 0.5398 | - | - |
| 0.5859 | 13150 | 0.4092 | - | - |
| 0.5864 | 13160 | 0.3806 | - | - |
| 0.5868 | 13170 | 0.4765 | - | - |
| 0.5873 | 13180 | 0.4142 | - | - |
| 0.5877 | 13190 | 0.5128 | - | - |
| 0.5882 | 13200 | 0.4144 | - | - |
| 0.5886 | 13210 | 0.5451 | - | - |
| 0.5891 | 13220 | 0.6271 | - | - |
| 0.5895 | 13230 | 0.5184 | - | - |
| 0.5900 | 13240 | 0.5295 | - | - |
| 0.5904 | 13250 | 0.6778 | - | - |
| 0.5908 | 13260 | 0.4314 | - | - |
| 0.5913 | 13270 | 0.6191 | - | - |
| 0.5917 | 13280 | 0.5368 | - | - |
| 0.5922 | 13290 | 0.5887 | - | - |
| 0.5926 | 13300 | 0.4649 | - | - |
| 0.5931 | 13310 | 0.5456 | - | - |
| 0.5935 | 13320 | 0.6386 | - | - |
| 0.5940 | 13330 | 0.5103 | - | - |
| 0.5944 | 13340 | 0.4517 | - | - |
| 0.5949 | 13350 | 0.6417 | - | - |
| 0.5953 | 13360 | 0.5603 | - | - |
| 0.5957 | 13370 | 0.4754 | - | - |
| 0.5962 | 13380 | 0.751 | - | - |
| 0.5966 | 13390 | 0.6738 | - | - |
| 0.5971 | 13400 | 0.5787 | - | - |
| 0.5975 | 13410 | 0.6515 | - | - |
| 0.5980 | 13420 | 0.5561 | - | - |
| 0.5984 | 13430 | 0.4203 | - | - |
| 0.5989 | 13440 | 0.5375 | - | - |
| 0.5993 | 13450 | 0.665 | - | - |
| 0.5998 | 13460 | 0.5822 | - | - |
| 0.6002 | 13470 | 0.7468 | - | - |
| 0.6006 | 13480 | 0.5974 | - | - |
| 0.6011 | 13490 | 0.5607 | - | - |
| 0.6015 | 13500 | 0.6841 | - | - |
| 0.6020 | 13510 | 0.5027 | - | - |
| 0.6024 | 13520 | 0.428 | - | - |
| 0.6029 | 13530 | 0.5472 | - | - |
| 0.6033 | 13540 | 0.5459 | - | - |
| 0.6038 | 13550 | 0.5012 | - | - |
| 0.6042 | 13560 | 0.7001 | - | - |
| 0.6047 | 13570 | 0.5486 | - | - |
| 0.6051 | 13580 | 0.5094 | - | - |
| 0.6055 | 13590 | 0.5448 | - | - |
| 0.6060 | 13600 | 0.5699 | - | - |
| 0.6064 | 13610 | 0.6869 | - | - |
| 0.6069 | 13620 | 0.5023 | - | - |
| 0.6073 | 13630 | 0.5085 | - | - |
| 0.6078 | 13640 | 0.518 | - | - |
| 0.6082 | 13650 | 0.6766 | - | - |
| 0.6087 | 13660 | 0.5309 | - | - |
| 0.6091 | 13670 | 0.6211 | - | - |
| 0.6096 | 13680 | 0.3251 | - | - |
| 0.6100 | 13690 | 0.5166 | - | - |
| 0.6104 | 13700 | 0.6379 | - | - |
| 0.6109 | 13710 | 0.6241 | - | - |
| 0.6113 | 13720 | 0.7437 | - | - |
| 0.6118 | 13730 | 0.812 | - | - |
| 0.6122 | 13740 | 0.7919 | - | - |
| 0.6127 | 13750 | 0.463 | - | - |
| 0.6131 | 13760 | 0.4957 | - | - |
| 0.6136 | 13770 | 0.668 | - | - |
| 0.6140 | 13780 | 0.6703 | - | - |
| 0.6145 | 13790 | 0.5042 | - | - |
| 0.6149 | 13800 | 0.6478 | - | - |
| 0.6154 | 13810 | 0.6265 | - | - |
| 0.6158 | 13820 | 0.676 | - | - |
| 0.6162 | 13830 | 0.673 | - | - |
| 0.6167 | 13840 | 0.6998 | - | - |
| 0.6171 | 13850 | 0.6694 | - | - |
| 0.6176 | 13860 | 0.5882 | - | - |
| 0.6180 | 13870 | 0.6053 | - | - |
| 0.6185 | 13880 | 0.733 | - | - |
| 0.6189 | 13890 | 0.5314 | - | - |
| 0.6194 | 13900 | 0.5823 | - | - |
| 0.6198 | 13910 | 0.6317 | - | - |
| 0.6203 | 13920 | 0.4119 | - | - |
| 0.6207 | 13930 | 0.5587 | - | - |
| 0.6211 | 13940 | 0.6781 | - | - |
| 0.6216 | 13950 | 0.6522 | - | - |
| 0.6220 | 13960 | 0.5028 | - | - |
| 0.6225 | 13970 | 0.5888 | - | - |
| 0.6229 | 13980 | 0.5828 | - | - |
| 0.6234 | 13990 | 0.7167 | - | - |
| 0.6238 | 14000 | 0.5071 | 0.7750 | 0.7701 |
| 0.6243 | 14010 | 0.504 | - | - |
| 0.6247 | 14020 | 0.5413 | - | - |
| 0.6252 | 14030 | 0.3984 | - | - |
| 0.6256 | 14040 | 0.5869 | - | - |
| 0.6260 | 14050 | 0.7178 | - | - |
| 0.6265 | 14060 | 0.5403 | - | - |
| 0.6269 | 14070 | 0.5818 | - | - |
| 0.6274 | 14080 | 0.56 | - | - |
| 0.6278 | 14090 | 0.5358 | - | - |
| 0.6283 | 14100 | 0.6581 | - | - |
| 0.6287 | 14110 | 0.5759 | - | - |
| 0.6292 | 14120 | 0.506 | - | - |
| 0.6296 | 14130 | 0.5693 | - | - |
| 0.6301 | 14140 | 0.4833 | - | - |
| 0.6305 | 14150 | 0.437 | - | - |
| 0.6309 | 14160 | 0.5275 | - | - |
| 0.6314 | 14170 | 0.4341 | - | - |
| 0.6318 | 14180 | 0.519 | - | - |
| 0.6323 | 14190 | 0.5814 | - | - |
| 0.6327 | 14200 | 0.5048 | - | - |
| 0.6332 | 14210 | 0.6698 | - | - |
| 0.6336 | 14220 | 0.4615 | - | - |
| 0.6341 | 14230 | 0.5296 | - | - |
| 0.6345 | 14240 | 0.6698 | - | - |
| 0.6350 | 14250 | 0.6957 | - | - |
| 0.6354 | 14260 | 0.6262 | - | - |
| 0.6358 | 14270 | 0.4748 | - | - |
| 0.6363 | 14280 | 0.3844 | - | - |
| 0.6367 | 14290 | 0.4154 | - | - |
| 0.6372 | 14300 | 0.5885 | - | - |
| 0.6376 | 14310 | 0.7601 | - | - |
| 0.6381 | 14320 | 0.5124 | - | - |
| 0.6385 | 14330 | 0.5676 | - | - |
| 0.6390 | 14340 | 0.6851 | - | - |
| 0.6394 | 14350 | 0.4901 | - | - |
| 0.6399 | 14360 | 0.6241 | - | - |
| 0.6403 | 14370 | 0.6507 | - | - |
| 0.6407 | 14380 | 0.6205 | - | - |
| 0.6412 | 14390 | 0.6978 | - | - |
| 0.6416 | 14400 | 0.8198 | - | - |
| 0.6421 | 14410 | 0.4881 | - | - |
| 0.6425 | 14420 | 0.5284 | - | - |
| 0.6430 | 14430 | 0.5135 | - | - |
| 0.6434 | 14440 | 0.6959 | - | - |
| 0.6439 | 14450 | 0.5884 | - | - |
| 0.6443 | 14460 | 0.7503 | - | - |
| 0.6448 | 14470 | 0.6128 | - | - |
| 0.6452 | 14480 | 0.6051 | - | - |
| 0.6456 | 14490 | 0.6184 | - | - |
| 0.6461 | 14500 | 0.4909 | - | - |
| 0.6465 | 14510 | 0.4208 | - | - |
| 0.6470 | 14520 | 0.704 | - | - |
| 0.6474 | 14530 | 0.5478 | - | - |
| 0.6479 | 14540 | 0.6603 | - | - |
| 0.6483 | 14550 | 0.5675 | - | - |
| 0.6488 | 14560 | 0.4911 | - | - |
| 0.6492 | 14570 | 0.4376 | - | - |
| 0.6497 | 14580 | 0.4739 | - | - |
| 0.6501 | 14590 | 0.5139 | - | - |
| 0.6506 | 14600 | 0.6323 | - | - |
| 0.6510 | 14610 | 0.6989 | - | - |
| 0.6514 | 14620 | 0.4663 | - | - |
| 0.6519 | 14630 | 0.6283 | - | - |
| 0.6523 | 14640 | 0.5338 | - | - |
| 0.6528 | 14650 | 0.5181 | - | - |
| 0.6532 | 14660 | 0.4779 | - | - |
| 0.6537 | 14670 | 0.4727 | - | - |
| 0.6541 | 14680 | 0.5531 | - | - |
| 0.6546 | 14690 | 0.5424 | - | - |
| 0.6550 | 14700 | 0.5559 | - | - |
| 0.6555 | 14710 | 0.5618 | - | - |
| 0.6559 | 14720 | 0.5181 | - | - |
| 0.6563 | 14730 | 0.7071 | - | - |
| 0.6568 | 14740 | 0.6763 | - | - |
| 0.6572 | 14750 | 0.5631 | - | - |
| 0.6577 | 14760 | 0.555 | - | - |
| 0.6581 | 14770 | 0.4795 | - | - |
| 0.6586 | 14780 | 0.6049 | - | - |
| 0.6590 | 14790 | 0.7414 | - | - |
| 0.6595 | 14800 | 0.4749 | - | - |
| 0.6599 | 14810 | 0.5419 | - | - |
| 0.6604 | 14820 | 0.5846 | - | - |
| 0.6608 | 14830 | 0.5745 | - | - |
| 0.6612 | 14840 | 0.539 | - | - |
| 0.6617 | 14850 | 0.5156 | - | - |
| 0.6621 | 14860 | 0.5475 | - | - |
| 0.6626 | 14870 | 0.594 | - | - |
| 0.6630 | 14880 | 0.6586 | - | - |
| 0.6635 | 14890 | 0.6606 | - | - |
| 0.6639 | 14900 | 0.6792 | - | - |
| 0.6644 | 14910 | 0.5392 | - | - |
| 0.6648 | 14920 | 0.6391 | - | - |
| 0.6653 | 14930 | 0.5458 | - | - |
| 0.6657 | 14940 | 0.624 | - | - |
| 0.6661 | 14950 | 0.5363 | - | - |
| 0.6666 | 14960 | 0.6601 | - | - |
| 0.6670 | 14970 | 0.5433 | - | - |
| 0.6675 | 14980 | 0.6944 | - | - |
| 0.6679 | 14990 | 0.6501 | - | - |
| 0.6684 | 15000 | 0.5094 | 0.7516 | 0.7474 |
| 0.6688 | 15010 | 0.402 | - | - |
| 0.6693 | 15020 | 0.5112 | - | - |
| 0.6697 | 15030 | 0.5717 | - | - |
| 0.6702 | 15040 | 0.5683 | - | - |
| 0.6706 | 15050 | 0.5695 | - | - |
| 0.6710 | 15060 | 0.5256 | - | - |
| 0.6715 | 15070 | 0.3821 | - | - |
| 0.6719 | 15080 | 0.5766 | - | - |
| 0.6724 | 15090 | 0.6759 | - | - |
| 0.6728 | 15100 | 0.527 | - | - |
| 0.6733 | 15110 | 0.6104 | - | - |
| 0.6737 | 15120 | 0.5227 | - | - |
| 0.6742 | 15130 | 0.4991 | - | - |
| 0.6746 | 15140 | 0.5098 | - | - |
| 0.6751 | 15150 | 0.4574 | - | - |
| 0.6755 | 15160 | 0.579 | - | - |
| 0.6759 | 15170 | 0.6386 | - | - |
| 0.6764 | 15180 | 0.4503 | - | - |
| 0.6768 | 15190 | 0.566 | - | - |
| 0.6773 | 15200 | 0.7506 | - | - |
| 0.6777 | 15210 | 0.7889 | - | - |
| 0.6782 | 15220 | 0.7078 | - | - |
| 0.6786 | 15230 | 0.5937 | - | - |
| 0.6791 | 15240 | 0.7335 | - | - |
| 0.6795 | 15250 | 0.4405 | - | - |
| 0.6800 | 15260 | 0.5401 | - | - |
| 0.6804 | 15270 | 0.571 | - | - |
| 0.6809 | 15280 | 0.5536 | - | - |
| 0.6813 | 15290 | 0.5679 | - | - |
| 0.6817 | 15300 | 0.4975 | - | - |
| 0.6822 | 15310 | 0.4321 | - | - |
| 0.6826 | 15320 | 0.5781 | - | - |
| 0.6831 | 15330 | 0.5391 | - | - |
| 0.6835 | 15340 | 0.4769 | - | - |
| 0.6840 | 15350 | 0.643 | - | - |
| 0.6844 | 15360 | 0.5703 | - | - |
| 0.6849 | 15370 | 0.6725 | - | - |
| 0.6853 | 15380 | 0.4105 | - | - |
| 0.6858 | 15390 | 0.6465 | - | - |
| 0.6862 | 15400 | 0.6231 | - | - |
| 0.6866 | 15410 | 0.5094 | - | - |
| 0.6871 | 15420 | 0.5107 | - | - |
| 0.6875 | 15430 | 0.5685 | - | - |
| 0.6880 | 15440 | 0.4415 | - | - |
| 0.6884 | 15450 | 0.4315 | - | - |
| 0.6889 | 15460 | 0.5188 | - | - |
| 0.6893 | 15470 | 0.5184 | - | - |
| 0.6898 | 15480 | 0.5266 | - | - |
| 0.6902 | 15490 | 0.5655 | - | - |
| 0.6907 | 15500 | 0.5594 | - | - |
| 0.6911 | 15510 | 0.4806 | - | - |
| 0.6915 | 15520 | 0.7032 | - | - |
| 0.6920 | 15530 | 0.6974 | - | - |
| 0.6924 | 15540 | 0.6555 | - | - |
| 0.6929 | 15550 | 0.5087 | - | - |
| 0.6933 | 15560 | 0.6331 | - | - |
| 0.6938 | 15570 | 0.6514 | - | - |
| 0.6942 | 15580 | 0.5982 | - | - |
| 0.6947 | 15590 | 0.4068 | - | - |
| 0.6951 | 15600 | 0.6159 | - | - |
| 0.6956 | 15610 | 0.6492 | - | - |
| 0.6960 | 15620 | 0.6159 | - | - |
| 0.6964 | 15630 | 0.6345 | - | - |
| 0.6969 | 15640 | 0.4102 | - | - |
| 0.6973 | 15650 | 0.5313 | - | - |
| 0.6978 | 15660 | 0.5476 | - | - |
| 0.6982 | 15670 | 0.4904 | - | - |
| 0.6987 | 15680 | 0.5541 | - | - |
| 0.6991 | 15690 | 0.4438 | - | - |
| 0.6996 | 15700 | 0.5396 | - | - |
| 0.7000 | 15710 | 0.4583 | - | - |
| 0.7005 | 15720 | 0.6321 | - | - |
| 0.7009 | 15730 | 0.5023 | - | - |
| 0.7013 | 15740 | 0.5447 | - | - |
| 0.7018 | 15750 | 0.4839 | - | - |
| 0.7022 | 15760 | 0.2881 | - | - |
| 0.7027 | 15770 | 0.565 | - | - |
| 0.7031 | 15780 | 0.6217 | - | - |
| 0.7036 | 15790 | 0.8223 | - | - |
| 0.7040 | 15800 | 0.49 | - | - |
| 0.7045 | 15810 | 0.6942 | - | - |
| 0.7049 | 15820 | 0.5618 | - | - |
| 0.7054 | 15830 | 0.4518 | - | - |
| 0.7058 | 15840 | 0.4746 | - | - |
| 0.7062 | 15850 | 0.5028 | - | - |
| 0.7067 | 15860 | 0.5187 | - | - |
| 0.7071 | 15870 | 0.5187 | - | - |
| 0.7076 | 15880 | 0.5386 | - | - |
| 0.7080 | 15890 | 0.4833 | - | - |
| 0.7085 | 15900 | 0.4029 | - | - |
| 0.7089 | 15910 | 0.5607 | - | - |
| 0.7094 | 15920 | 0.4192 | - | - |
| 0.7098 | 15930 | 0.4641 | - | - |
| 0.7103 | 15940 | 0.6046 | - | - |
| 0.7107 | 15950 | 0.4561 | - | - |
| 0.7112 | 15960 | 0.5743 | - | - |
| 0.7116 | 15970 | 0.5099 | - | - |
| 0.7120 | 15980 | 0.5778 | - | - |
| 0.7125 | 15990 | 0.4376 | - | - |
| 0.7129 | 16000 | 0.4932 | 0.7523 | 0.7492 |
| 0.7134 | 16010 | 0.7829 | - | - |
| 0.7138 | 16020 | 0.6266 | - | - |
| 0.7143 | 16030 | 0.5134 | - | - |
| 0.7147 | 16040 | 0.5033 | - | - |
| 0.7152 | 16050 | 0.5367 | - | - |
| 0.7156 | 16060 | 0.4508 | - | - |
| 0.7161 | 16070 | 0.7549 | - | - |
| 0.7165 | 16080 | 0.7274 | - | - |
| 0.7169 | 16090 | 0.5064 | - | - |
| 0.7174 | 16100 | 0.5051 | - | - |
| 0.7178 | 16110 | 0.3907 | - | - |
| 0.7183 | 16120 | 0.5351 | - | - |
| 0.7187 | 16130 | 0.5931 | - | - |
| 0.7192 | 16140 | 0.5771 | - | - |
| 0.7196 | 16150 | 0.53 | - | - |
| 0.7201 | 16160 | 0.6805 | - | - |
| 0.7205 | 16170 | 0.5097 | - | - |
| 0.7210 | 16180 | 0.593 | - | - |
| 0.7214 | 16190 | 0.4298 | - | - |
| 0.7218 | 16200 | 0.589 | - | - |
| 0.7223 | 16210 | 0.7176 | - | - |
| 0.7227 | 16220 | 0.5244 | - | - |
| 0.7232 | 16230 | 0.4668 | - | - |
| 0.7236 | 16240 | 0.5821 | - | - |
| 0.7241 | 16250 | 0.6241 | - | - |
| 0.7245 | 16260 | 0.4775 | - | - |
| 0.7250 | 16270 | 0.5743 | - | - |
| 0.7254 | 16280 | 0.3967 | - | - |
| 0.7259 | 16290 | 0.4876 | - | - |
| 0.7263 | 16300 | 0.4058 | - | - |
| 0.7267 | 16310 | 0.4601 | - | - |
| 0.7272 | 16320 | 0.5654 | - | - |
| 0.7276 | 16330 | 0.6028 | - | - |
| 0.7281 | 16340 | 0.6415 | - | - |
| 0.7285 | 16350 | 0.3167 | - | - |
| 0.7290 | 16360 | 0.5339 | - | - |
| 0.7294 | 16370 | 0.7043 | - | - |
| 0.7299 | 16380 | 0.7496 | - | - |
| 0.7303 | 16390 | 0.4897 | - | - |
| 0.7308 | 16400 | 0.518 | - | - |
| 0.7312 | 16410 | 0.5364 | - | - |
| 0.7316 | 16420 | 0.5121 | - | - |
| 0.7321 | 16430 | 0.3781 | - | - |
| 0.7325 | 16440 | 0.4174 | - | - |
| 0.7330 | 16450 | 0.5763 | - | - |
| 0.7334 | 16460 | 0.5051 | - | - |
| 0.7339 | 16470 | 0.5612 | - | - |
| 0.7343 | 16480 | 0.4781 | - | - |
| 0.7348 | 16490 | 0.5336 | - | - |
| 0.7352 | 16500 | 0.9319 | - | - |
| 0.7357 | 16510 | 0.6356 | - | - |
| 0.7361 | 16520 | 0.8033 | - | - |
| 0.7365 | 16530 | 0.7314 | - | - |
| 0.7370 | 16540 | 0.732 | - | - |
| 0.7374 | 16550 | 0.5793 | - | - |
| 0.7379 | 16560 | 0.601 | - | - |
| 0.7383 | 16570 | 0.5375 | - | - |
| 0.7388 | 16580 | 0.6522 | - | - |
| 0.7392 | 16590 | 0.3866 | - | - |
| 0.7397 | 16600 | 0.7183 | - | - |
| 0.7401 | 16610 | 0.5708 | - | - |
| 0.7406 | 16620 | 0.6024 | - | - |
| 0.7410 | 16630 | 0.4987 | - | - |
| 0.7415 | 16640 | 0.5332 | - | - |
| 0.7419 | 16650 | 0.5072 | - | - |
| 0.7423 | 16660 | 0.4379 | - | - |
| 0.7428 | 16670 | 0.6513 | - | - |
| 0.7432 | 16680 | 0.499 | - | - |
| 0.7437 | 16690 | 0.4742 | - | - |
| 0.7441 | 16700 | 0.6756 | - | - |
| 0.7446 | 16710 | 0.3494 | - | - |
| 0.7450 | 16720 | 0.4907 | - | - |
| 0.7455 | 16730 | 0.5969 | - | - |
| 0.7459 | 16740 | 0.6896 | - | - |
| 0.7464 | 16750 | 0.5148 | - | - |
| 0.7468 | 16760 | 0.6306 | - | - |
| 0.7472 | 16770 | 0.5164 | - | - |
| 0.7477 | 16780 | 0.3607 | - | - |
| 0.7481 | 16790 | 0.4972 | - | - |
| 0.7486 | 16800 | 0.5279 | - | - |
| 0.7490 | 16810 | 0.5625 | - | - |
| 0.7495 | 16820 | 0.4866 | - | - |
| 0.7499 | 16830 | 0.3799 | - | - |
| 0.7504 | 16840 | 0.5623 | - | - |
| 0.7508 | 16850 | 0.586 | - | - |
| 0.7513 | 16860 | 0.59 | - | - |
| 0.7517 | 16870 | 0.433 | - | - |
| 0.7521 | 16880 | 0.7061 | - | - |
| 0.7526 | 16890 | 0.4659 | - | - |
| 0.7530 | 16900 | 0.4547 | - | - |
| 0.7535 | 16910 | 0.5156 | - | - |
| 0.7539 | 16920 | 0.4009 | - | - |
| 0.7544 | 16930 | 0.7071 | - | - |
| 0.7548 | 16940 | 0.4805 | - | - |
| 0.7553 | 16950 | 0.5267 | - | - |
| 0.7557 | 16960 | 0.4446 | - | - |
| 0.7562 | 16970 | 0.5919 | - | - |
| 0.7566 | 16980 | 0.5042 | - | - |
| 0.7570 | 16990 | 0.5339 | - | - |
| 0.7575 | 17000 | 0.5699 | 0.7496 | 0.7462 |
| 0.7579 | 17010 | 0.4346 | - | - |
| 0.7584 | 17020 | 0.4169 | - | - |
| 0.7588 | 17030 | 0.5988 | - | - |
| 0.7593 | 17040 | 0.4998 | - | - |
| 0.7597 | 17050 | 0.3809 | - | - |
| 0.7602 | 17060 | 0.4926 | - | - |
| 0.7606 | 17070 | 0.5523 | - | - |
| 0.7611 | 17080 | 0.515 | - | - |
| 0.7615 | 17090 | 0.5585 | - | - |
| 0.7619 | 17100 | 0.5219 | - | - |
| 0.7624 | 17110 | 0.5232 | - | - |
| 0.7628 | 17120 | 0.5359 | - | - |
| 0.7633 | 17130 | 0.8287 | - | - |
| 0.7637 | 17140 | 0.5073 | - | - |
| 0.7642 | 17150 | 0.4863 | - | - |
| 0.7646 | 17160 | 0.488 | - | - |
| 0.7651 | 17170 | 0.6904 | - | - |
| 0.7655 | 17180 | 0.6651 | - | - |
| 0.7660 | 17190 | 0.4431 | - | - |
| 0.7664 | 17200 | 0.4804 | - | - |
| 0.7668 | 17210 | 0.4422 | - | - |
| 0.7673 | 17220 | 0.3749 | - | - |
| 0.7677 | 17230 | 0.5685 | - | - |
| 0.7682 | 17240 | 0.5251 | - | - |
| 0.7686 | 17250 | 0.5245 | - | - |
| 0.7691 | 17260 | 0.6165 | - | - |
| 0.7695 | 17270 | 0.4759 | - | - |
| 0.7700 | 17280 | 0.5169 | - | - |
| 0.7704 | 17290 | 0.5229 | - | - |
| 0.7709 | 17300 | 0.553 | - | - |
| 0.7713 | 17310 | 0.583 | - | - |
| 0.7718 | 17320 | 0.5306 | - | - |
| 0.7722 | 17330 | 0.501 | - | - |
| 0.7726 | 17340 | 0.53 | - | - |
| 0.7731 | 17350 | 0.5134 | - | - |
| 0.7735 | 17360 | 0.4512 | - | - |
| 0.7740 | 17370 | 0.5617 | - | - |
| 0.7744 | 17380 | 0.6177 | - | - |
| 0.7749 | 17390 | 0.5851 | - | - |
| 0.7753 | 17400 | 0.4745 | - | - |
| 0.7758 | 17410 | 0.6976 | - | - |
| 0.7762 | 17420 | 0.6045 | - | - |
| 0.7767 | 17430 | 0.6545 | - | - |
| 0.7771 | 17440 | 0.6041 | - | - |
| 0.7775 | 17450 | 0.7006 | - | - |
| 0.7780 | 17460 | 0.5423 | - | - |
| 0.7784 | 17470 | 0.4721 | - | - |
| 0.7789 | 17480 | 0.5539 | - | - |
| 0.7793 | 17490 | 0.5625 | - | - |
| 0.7798 | 17500 | 0.5236 | - | - |
| 0.7802 | 17510 | 0.4468 | - | - |
| 0.7807 | 17520 | 0.5765 | - | - |
| 0.7811 | 17530 | 0.4628 | - | - |
| 0.7816 | 17540 | 0.52 | - | - |
| 0.7820 | 17550 | 0.5467 | - | - |
| 0.7824 | 17560 | 0.7181 | - | - |
| 0.7829 | 17570 | 0.7567 | - | - |
| 0.7833 | 17580 | 0.5075 | - | - |
| 0.7838 | 17590 | 0.6817 | - | - |
| 0.7842 | 17600 | 0.5921 | - | - |
| 0.7847 | 17610 | 0.4489 | - | - |
| 0.7851 | 17620 | 0.5586 | - | - |
| 0.7856 | 17630 | 0.5798 | - | - |
| 0.7860 | 17640 | 0.4728 | - | - |
| 0.7865 | 17650 | 0.6941 | - | - |
| 0.7869 | 17660 | 0.4723 | - | - |
| 0.7873 | 17670 | 0.7198 | - | - |
| 0.7878 | 17680 | 0.5867 | - | - |
| 0.7882 | 17690 | 0.6237 | - | - |
| 0.7887 | 17700 | 0.4013 | - | - |
| 0.7891 | 17710 | 0.5604 | - | - |
| 0.7896 | 17720 | 0.5035 | - | - |
| 0.7900 | 17730 | 0.4583 | - | - |
| 0.7905 | 17740 | 0.5218 | - | - |
| 0.7909 | 17750 | 0.5071 | - | - |
| 0.7914 | 17760 | 0.5294 | - | - |
| 0.7918 | 17770 | 0.5037 | - | - |
| 0.7922 | 17780 | 0.5653 | - | - |
| 0.7927 | 17790 | 0.4899 | - | - |
| 0.7931 | 17800 | 0.4789 | - | - |
| 0.7936 | 17810 | 0.6239 | - | - |
| 0.7940 | 17820 | 0.5465 | - | - |
| 0.7945 | 17830 | 0.6826 | - | - |
| 0.7949 | 17840 | 0.4555 | - | - |
| 0.7954 | 17850 | 0.6875 | - | - |
| 0.7958 | 17860 | 0.5573 | - | - |
| 0.7963 | 17870 | 0.5318 | - | - |
| 0.7967 | 17880 | 0.6274 | - | - |
| 0.7971 | 17890 | 0.4676 | - | - |
| 0.7976 | 17900 | 0.6048 | - | - |
| 0.7980 | 17910 | 0.6715 | - | - |
| 0.7985 | 17920 | 0.4734 | - | - |
| 0.7989 | 17930 | 0.5396 | - | - |
| 0.7994 | 17940 | 0.6173 | - | - |
| 0.7998 | 17950 | 0.532 | - | - |
| 0.8003 | 17960 | 0.4464 | - | - |
| 0.8007 | 17970 | 0.5829 | - | - |
| 0.8012 | 17980 | 0.5667 | - | - |
| 0.8016 | 17990 | 0.5483 | - | - |
| 0.8020 | 18000 | 0.5596 | 0.7445 | 0.7428 |
| 0.8025 | 18010 | 0.6118 | - | - |
| 0.8029 | 18020 | 0.7647 | - | - |
| 0.8034 | 18030 | 0.6971 | - | - |
| 0.8038 | 18040 | 0.4666 | - | - |
| 0.8043 | 18050 | 0.6014 | - | - |
| 0.8047 | 18060 | 0.3495 | - | - |
| 0.8052 | 18070 | 0.4019 | - | - |
| 0.8056 | 18080 | 0.5342 | - | - |
| 0.8061 | 18090 | 0.6704 | - | - |
| 0.8065 | 18100 | 0.5106 | - | - |
| 0.8070 | 18110 | 0.5711 | - | - |
| 0.8074 | 18120 | 0.8785 | - | - |
| 0.8078 | 18130 | 0.4627 | - | - |
| 0.8083 | 18140 | 0.4494 | - | - |
| 0.8087 | 18150 | 0.5384 | - | - |
| 0.8092 | 18160 | 0.4981 | - | - |
| 0.8096 | 18170 | 0.6548 | - | - |
| 0.8101 | 18180 | 0.655 | - | - |
| 0.8105 | 18190 | 0.6912 | - | - |
| 0.8110 | 18200 | 0.6283 | - | - |
| 0.8114 | 18210 | 0.5114 | - | - |
| 0.8119 | 18220 | 0.5676 | - | - |
| 0.8123 | 18230 | 0.6201 | - | - |
| 0.8127 | 18240 | 0.6172 | - | - |
| 0.8132 | 18250 | 0.5437 | - | - |
| 0.8136 | 18260 | 0.6001 | - | - |
| 0.8141 | 18270 | 0.4326 | - | - |
| 0.8145 | 18280 | 0.426 | - | - |
| 0.8150 | 18290 | 0.6058 | - | - |
| 0.8154 | 18300 | 0.653 | - | - |
| 0.8159 | 18310 | 0.6067 | - | - |
| 0.8163 | 18320 | 0.7044 | - | - |
| 0.8168 | 18330 | 0.7033 | - | - |
| 0.8172 | 18340 | 0.5087 | - | - |
| 0.8176 | 18350 | 0.489 | - | - |
| 0.8181 | 18360 | 0.4738 | - | - |
| 0.8185 | 18370 | 0.4565 | - | - |
| 0.8190 | 18380 | 0.5663 | - | - |
| 0.8194 | 18390 | 0.6001 | - | - |
| 0.8199 | 18400 | 0.5305 | - | - |
| 0.8203 | 18410 | 0.4548 | - | - |
| 0.8208 | 18420 | 0.5785 | - | - |
| 0.8212 | 18430 | 0.552 | - | - |
| 0.8217 | 18440 | 0.5188 | - | - |
| 0.8221 | 18450 | 0.495 | - | - |
| 0.8225 | 18460 | 0.6741 | - | - |
| 0.8230 | 18470 | 0.5517 | - | - |
| 0.8234 | 18480 | 0.6478 | - | - |
| 0.8239 | 18490 | 0.4201 | - | - |
| 0.8243 | 18500 | 0.4919 | - | - |
| 0.8248 | 18510 | 0.5587 | - | - |
| 0.8252 | 18520 | 0.5623 | - | - |
| 0.8257 | 18530 | 0.4667 | - | - |
| 0.8261 | 18540 | 0.4398 | - | - |
| 0.8266 | 18550 | 0.5895 | - | - |
| 0.8270 | 18560 | 0.6194 | - | - |
| 0.8274 | 18570 | 0.6028 | - | - |
| 0.8279 | 18580 | 0.4752 | - | - |
| 0.8283 | 18590 | 0.7169 | - | - |
| 0.8288 | 18600 | 0.5635 | - | - |
| 0.8292 | 18610 | 0.7321 | - | - |
| 0.8297 | 18620 | 0.5296 | - | - |
| 0.8301 | 18630 | 0.5936 | - | - |
| 0.8306 | 18640 | 0.7302 | - | - |
| 0.8310 | 18650 | 0.6347 | - | - |
| 0.8315 | 18660 | 0.6821 | - | - |
| 0.8319 | 18670 | 0.855 | - | - |
| 0.8323 | 18680 | 0.7063 | - | - |
| 0.8328 | 18690 | 0.5078 | - | - |
| 0.8332 | 18700 | 0.5074 | - | - |
| 0.8337 | 18710 | 0.5544 | - | - |
| 0.8341 | 18720 | 0.5404 | - | - |
| 0.8346 | 18730 | 0.5274 | - | - |
| 0.8350 | 18740 | 0.4489 | - | - |
| 0.8355 | 18750 | 0.7473 | - | - |
| 0.8359 | 18760 | 0.4095 | - | - |
| 0.8364 | 18770 | 0.569 | - | - |
| 0.8368 | 18780 | 0.5134 | - | - |
| 0.8373 | 18790 | 0.5759 | - | - |
| 0.8377 | 18800 | 0.4629 | - | - |
| 0.8381 | 18810 | 0.4681 | - | - |
| 0.8386 | 18820 | 0.539 | - | - |
| 0.8390 | 18830 | 0.5683 | - | - |
| 0.8395 | 18840 | 0.591 | - | - |
| 0.8399 | 18850 | 0.6679 | - | - |
| 0.8404 | 18860 | 0.5621 | - | - |
| 0.8408 | 18870 | 0.5241 | - | - |
| 0.8413 | 18880 | 0.6713 | - | - |
| 0.8417 | 18890 | 0.7419 | - | - |
| 0.8422 | 18900 | 0.6318 | - | - |
| 0.8426 | 18910 | 0.576 | - | - |
| 0.8430 | 18920 | 0.5084 | - | - |
| 0.8435 | 18930 | 0.6649 | - | - |
| 0.8439 | 18940 | 0.5693 | - | - |
| 0.8444 | 18950 | 0.5025 | - | - |
| 0.8448 | 18960 | 0.5022 | - | - |
| 0.8453 | 18970 | 0.6031 | - | - |
| 0.8457 | 18980 | 0.5538 | - | - |
| 0.8462 | 18990 | 0.777 | - | - |
| 0.8466 | 19000 | 0.5484 | 0.7518 | 0.7485 |
| 0.8471 | 19010 | 0.5938 | - | - |
| 0.8475 | 19020 | 0.5246 | - | - |
| 0.8479 | 19030 | 0.5065 | - | - |
| 0.8484 | 19040 | 0.5503 | - | - |
| 0.8488 | 19050 | 0.6705 | - | - |
| 0.8493 | 19060 | 0.4448 | - | - |
| 0.8497 | 19070 | 0.4159 | - | - |
| 0.8502 | 19080 | 0.6601 | - | - |
| 0.8506 | 19090 | 0.4371 | - | - |
| 0.8511 | 19100 | 0.667 | - | - |
| 0.8515 | 19110 | 0.5533 | - | - |
| 0.8520 | 19120 | 0.3911 | - | - |
| 0.8524 | 19130 | 0.5115 | - | - |
| 0.8528 | 19140 | 0.6162 | - | - |
| 0.8533 | 19150 | 0.4761 | - | - |
| 0.8537 | 19160 | 0.4617 | - | - |
| 0.8542 | 19170 | 0.5319 | - | - |
| 0.8546 | 19180 | 0.468 | - | - |
| 0.8551 | 19190 | 0.3852 | - | - |
| 0.8555 | 19200 | 0.5298 | - | - |
| 0.8560 | 19210 | 0.489 | - | - |
| 0.8564 | 19220 | 0.4981 | - | - |
| 0.8569 | 19230 | 0.6547 | - | - |
| 0.8573 | 19240 | 0.6794 | - | - |
| 0.8577 | 19250 | 0.5864 | - | - |
| 0.8582 | 19260 | 0.5155 | - | - |
| 0.8586 | 19270 | 0.5094 | - | - |
| 0.8591 | 19280 | 0.4728 | - | - |
| 0.8595 | 19290 | 0.7412 | - | - |
| 0.8600 | 19300 | 0.6433 | - | - |
| 0.8604 | 19310 | 0.4285 | - | - |
| 0.8609 | 19320 | 0.5404 | - | - |
| 0.8613 | 19330 | 0.5417 | - | - |
| 0.8618 | 19340 | 0.5231 | - | - |
| 0.8622 | 19350 | 0.5355 | - | - |
| 0.8626 | 19360 | 0.4745 | - | - |
| 0.8631 | 19370 | 0.4801 | - | - |
| 0.8635 | 19380 | 0.6499 | - | - |
| 0.8640 | 19390 | 0.541 | - | - |
| 0.8644 | 19400 | 0.4924 | - | - |
| 0.8649 | 19410 | 0.5882 | - | - |
| 0.8653 | 19420 | 0.5054 | - | - |
| 0.8658 | 19430 | 0.6824 | - | - |
| 0.8662 | 19440 | 0.6458 | - | - |
| 0.8667 | 19450 | 0.3951 | - | - |
| 0.8671 | 19460 | 0.5895 | - | - |
| 0.8676 | 19470 | 0.4867 | - | - |
| 0.8680 | 19480 | 0.648 | - | - |
| 0.8684 | 19490 | 0.6147 | - | - |
| 0.8689 | 19500 | 0.4959 | - | - |
| 0.8693 | 19510 | 0.6316 | - | - |
| 0.8698 | 19520 | 0.5663 | - | - |
| 0.8702 | 19530 | 0.4536 | - | - |
| 0.8707 | 19540 | 0.4991 | - | - |
| 0.8711 | 19550 | 0.4639 | - | - |
| 0.8716 | 19560 | 0.4277 | - | - |
| 0.8720 | 19570 | 0.491 | - | - |
| 0.8725 | 19580 | 0.6409 | - | - |
| 0.8729 | 19590 | 0.3835 | - | - |
| 0.8733 | 19600 | 0.4344 | - | - |
| 0.8738 | 19610 | 0.4784 | - | - |
| 0.8742 | 19620 | 0.3592 | - | - |
| 0.8747 | 19630 | 0.3788 | - | - |
| 0.8751 | 19640 | 0.4745 | - | - |
| 0.8756 | 19650 | 0.4118 | - | - |
| 0.8760 | 19660 | 0.4095 | - | - |
| 0.8765 | 19670 | 0.3367 | - | - |
| 0.8769 | 19680 | 0.3117 | - | - |
| 0.8774 | 19690 | 0.4516 | - | - |
| 0.8778 | 19700 | 0.382 | - | - |
| 0.8782 | 19710 | 0.3867 | - | - |
| 0.8787 | 19720 | 0.3931 | - | - |
| 0.8791 | 19730 | 0.3943 | - | - |
| 0.8796 | 19740 | 0.3793 | - | - |
| 0.8800 | 19750 | 0.3344 | - | - |
| 0.8805 | 19760 | 0.3461 | - | - |
| 0.8809 | 19770 | 0.4091 | - | - |
| 0.8814 | 19780 | 0.3563 | - | - |
| 0.8818 | 19790 | 0.3902 | - | - |
| 0.8823 | 19800 | 0.3799 | - | - |
| 0.8827 | 19810 | 0.3874 | - | - |
| 0.8831 | 19820 | 0.4043 | - | - |
| 0.8836 | 19830 | 0.3724 | - | - |
| 0.8840 | 19840 | 0.5467 | - | - |
| 0.8845 | 19850 | 0.3153 | - | - |
| 0.8849 | 19860 | 0.3634 | - | - |
| 0.8854 | 19870 | 0.362 | - | - |
| 0.8858 | 19880 | 0.3181 | - | - |
| 0.8863 | 19890 | 0.3277 | - | - |
| 0.8867 | 19900 | 0.316 | - | - |
| 0.8872 | 19910 | 0.3937 | - | - |
| 0.8876 | 19920 | 0.3783 | - | - |
| 0.8880 | 19930 | 0.3764 | - | - |
| 0.8885 | 19940 | 0.3251 | - | - |
| 0.8889 | 19950 | 0.3665 | - | - |
| 0.8894 | 19960 | 0.3575 | - | - |
| 0.8898 | 19970 | 0.3747 | - | - |
| 0.8903 | 19980 | 0.4101 | - | - |
| 0.8907 | 19990 | 0.3056 | - | - |
| 0.8912 | 20000 | 0.3189 | 0.8074 | 0.8051 |
| 0.8916 | 20010 | 0.364 | - | - |
| 0.8921 | 20020 | 0.3204 | - | - |
| 0.8925 | 20030 | 0.3389 | - | - |
| 0.8929 | 20040 | 0.3796 | - | - |
| 0.8934 | 20050 | 0.3081 | - | - |
| 0.8938 | 20060 | 0.3249 | - | - |
| 0.8943 | 20070 | 0.2533 | - | - |
| 0.8947 | 20080 | 0.3436 | - | - |
| 0.8952 | 20090 | 0.3153 | - | - |
| 0.8956 | 20100 | 0.2625 | - | - |
| 0.8961 | 20110 | 0.289 | - | - |
| 0.8965 | 20120 | 0.3002 | - | - |
| 0.8970 | 20130 | 0.3939 | - | - |
| 0.8974 | 20140 | 0.3463 | - | - |
| 0.8979 | 20150 | 0.3398 | - | - |
| 0.8983 | 20160 | 0.2984 | - | - |
| 0.8987 | 20170 | 0.35 | - | - |
| 0.8992 | 20180 | 0.3268 | - | - |
| 0.8996 | 20190 | 0.3519 | - | - |
| 0.9001 | 20200 | 0.2915 | - | - |
| 0.9005 | 20210 | 0.329 | - | - |
| 0.9010 | 20220 | 0.325 | - | - |
| 0.9014 | 20230 | 0.2781 | - | - |
| 0.9019 | 20240 | 0.3261 | - | - |
| 0.9023 | 20250 | 0.3581 | - | - |
| 0.9028 | 20260 | 0.2855 | - | - |
| 0.9032 | 20270 | 0.3022 | - | - |
| 0.9036 | 20280 | 0.3605 | - | - |
| 0.9041 | 20290 | 0.2707 | - | - |
| 0.9045 | 20300 | 0.2977 | - | - |
| 0.9050 | 20310 | 0.2953 | - | - |
| 0.9054 | 20320 | 0.3196 | - | - |
| 0.9059 | 20330 | 0.3133 | - | - |
| 0.9063 | 20340 | 0.3345 | - | - |
| 0.9068 | 20350 | 0.2985 | - | - |
| 0.9072 | 20360 | 0.2996 | - | - |
| 0.9077 | 20370 | 0.3231 | - | - |
| 0.9081 | 20380 | 0.394 | - | - |
| 0.9085 | 20390 | 0.3197 | - | - |
| 0.9090 | 20400 | 0.3176 | - | - |
| 0.9094 | 20410 | 0.3721 | - | - |
| 0.9099 | 20420 | 0.2788 | - | - |
| 0.9103 | 20430 | 0.3071 | - | - |
| 0.9108 | 20440 | 0.3371 | - | - |
| 0.9112 | 20450 | 0.3831 | - | - |
| 0.9117 | 20460 | 0.2793 | - | - |
| 0.9121 | 20470 | 0.3557 | - | - |
| 0.9126 | 20480 | 0.3969 | - | - |
| 0.9130 | 20490 | 0.3622 | - | - |
| 0.9134 | 20500 | 0.3075 | - | - |
| 0.9139 | 20510 | 0.3004 | - | - |
| 0.9143 | 20520 | 0.3867 | - | - |
| 0.9148 | 20530 | 0.2777 | - | - |
| 0.9152 | 20540 | 0.2898 | - | - |
| 0.9157 | 20550 | 0.2957 | - | - |
| 0.9161 | 20560 | 0.3882 | - | - |
| 0.9166 | 20570 | 0.3674 | - | - |
| 0.9170 | 20580 | 0.2711 | - | - |
| 0.9175 | 20590 | 0.3202 | - | - |
| 0.9179 | 20600 | 0.3264 | - | - |
| 0.9183 | 20610 | 0.3157 | - | - |
| 0.9188 | 20620 | 0.3867 | - | - |
| 0.9192 | 20630 | 0.336 | - | - |
| 0.9197 | 20640 | 0.3165 | - | - |
| 0.9201 | 20650 | 0.3072 | - | - |
| 0.9206 | 20660 | 0.2649 | - | - |
| 0.9210 | 20670 | 0.2596 | - | - |
| 0.9215 | 20680 | 0.3054 | - | - |
| 0.9219 | 20690 | 0.273 | - | - |
| 0.9224 | 20700 | 0.3068 | - | - |
| 0.9228 | 20710 | 0.3107 | - | - |
| 0.9232 | 20720 | 0.3528 | - | - |
| 0.9237 | 20730 | 0.2831 | - | - |
| 0.9241 | 20740 | 0.3256 | - | - |
| 0.9246 | 20750 | 0.3066 | - | - |
| 0.9250 | 20760 | 0.3476 | - | - |
| 0.9255 | 20770 | 0.2802 | - | - |
| 0.9259 | 20780 | 0.2738 | - | - |
| 0.9264 | 20790 | 0.2889 | - | - |
| 0.9268 | 20800 | 0.2947 | - | - |
| 0.9273 | 20810 | 0.2799 | - | - |
| 0.9277 | 20820 | 0.2901 | - | - |
| 0.9281 | 20830 | 0.2503 | - | - |
| 0.9286 | 20840 | 0.2754 | - | - |
| 0.9290 | 20850 | 0.3161 | - | - |
| 0.9295 | 20860 | 0.3315 | - | - |
| 0.9299 | 20870 | 0.2616 | - | - |
| 0.9304 | 20880 | 0.2516 | - | - |
| 0.9308 | 20890 | 0.2927 | - | - |
| 0.9313 | 20900 | 0.2911 | - | - |
| 0.9317 | 20910 | 0.3289 | - | - |
| 0.9322 | 20920 | 0.3017 | - | - |
| 0.9326 | 20930 | 0.3045 | - | - |
| 0.9331 | 20940 | 0.3157 | - | - |
| 0.9335 | 20950 | 0.3229 | - | - |
| 0.9339 | 20960 | 0.3409 | - | - |
| 0.9344 | 20970 | 0.3041 | - | - |
| 0.9348 | 20980 | 0.3426 | - | - |
| 0.9353 | 20990 | 0.3164 | - | - |
| 0.9357 | 21000 | 0.2992 | 0.8168 | 0.8147 |
| 0.9362 | 21010 | 0.2777 | - | - |
| 0.9366 | 21020 | 0.2699 | - | - |
| 0.9371 | 21030 | 0.2562 | - | - |
| 0.9375 | 21040 | 0.2585 | - | - |
| 0.9380 | 21050 | 0.2547 | - | - |
| 0.9384 | 21060 | 0.3015 | - | - |
| 0.9388 | 21070 | 0.3082 | - | - |
| 0.9393 | 21080 | 0.2681 | - | - |
| 0.9397 | 21090 | 0.2932 | - | - |
| 0.9402 | 21100 | 0.2606 | - | - |
| 0.9406 | 21110 | 0.2678 | - | - |
| 0.9411 | 21120 | 0.3117 | - | - |
| 0.9415 | 21130 | 0.2427 | - | - |
| 0.9420 | 21140 | 0.248 | - | - |
| 0.9424 | 21150 | 0.3272 | - | - |
| 0.9429 | 21160 | 0.2141 | - | - |
| 0.9433 | 21170 | 0.2738 | - | - |
| 0.9437 | 21180 | 0.3067 | - | - |
| 0.9442 | 21190 | 0.2853 | - | - |
| 0.9446 | 21200 | 0.3489 | - | - |
| 0.9451 | 21210 | 0.2531 | - | - |
| 0.9455 | 21220 | 0.2938 | - | - |
| 0.9460 | 21230 | 0.3071 | - | - |
| 0.9464 | 21240 | 0.2389 | - | - |
| 0.9469 | 21250 | 0.2933 | - | - |
| 0.9473 | 21260 | 0.3284 | - | - |
| 0.9478 | 21270 | 0.3114 | - | - |
| 0.9482 | 21280 | 0.345 | - | - |
| 0.9486 | 21290 | 0.2978 | - | - |
| 0.9491 | 21300 | 0.3659 | - | - |
| 0.9495 | 21310 | 0.2757 | - | - |
| 0.9500 | 21320 | 0.3196 | - | - |
| 0.9504 | 21330 | 0.3127 | - | - |
| 0.9509 | 21340 | 0.2685 | - | - |
| 0.9513 | 21350 | 0.28 | - | - |
| 0.9518 | 21360 | 0.2694 | - | - |
| 0.9522 | 21370 | 0.3018 | - | - |
| 0.9527 | 21380 | 0.2653 | - | - |
| 0.9531 | 21390 | 0.3224 | - | - |
| 0.9535 | 21400 | 0.3489 | - | - |
| 0.9540 | 21410 | 0.2543 | - | - |
| 0.9544 | 21420 | 0.3101 | - | - |
| 0.9549 | 21430 | 0.2739 | - | - |
| 0.9553 | 21440 | 0.2351 | - | - |
| 0.9558 | 21450 | 0.2731 | - | - |
| 0.9562 | 21460 | 0.3387 | - | - |
| 0.9567 | 21470 | 0.2755 | - | - |
| 0.9571 | 21480 | 0.289 | - | - |
| 0.9576 | 21490 | 0.2801 | - | - |
| 0.9580 | 21500 | 0.287 | - | - |
| 0.9584 | 21510 | 0.2881 | - | - |
| 0.9589 | 21520 | 0.2727 | - | - |
| 0.9593 | 21530 | 0.381 | - | - |
| 0.9598 | 21540 | 0.2914 | - | - |
| 0.9602 | 21550 | 0.3179 | - | - |
| 0.9607 | 21560 | 0.224 | - | - |
| 0.9611 | 21570 | 0.298 | - | - |
| 0.9616 | 21580 | 0.2746 | - | - |
| 0.9620 | 21590 | 0.2861 | - | - |
| 0.9625 | 21600 | 0.2784 | - | - |
| 0.9629 | 21610 | 0.231 | - | - |
| 0.9634 | 21620 | 0.2673 | - | - |
| 0.9638 | 21630 | 0.2942 | - | - |
| 0.9642 | 21640 | 0.2642 | - | - |
| 0.9647 | 21650 | 0.2466 | - | - |
| 0.9651 | 21660 | 0.3625 | - | - |
| 0.9656 | 21670 | 0.2826 | - | - |
| 0.9660 | 21680 | 0.2819 | - | - |
| 0.9665 | 21690 | 0.2565 | - | - |
| 0.9669 | 21700 | 0.2956 | - | - |
| 0.9674 | 21710 | 0.282 | - | - |
| 0.9678 | 21720 | 0.3186 | - | - |
| 0.9683 | 21730 | 0.3551 | - | - |
| 0.9687 | 21740 | 0.2796 | - | - |
| 0.9691 | 21750 | 0.2495 | - | - |
| 0.9696 | 21760 | 0.2702 | - | - |
| 0.9700 | 21770 | 0.3083 | - | - |
| 0.9705 | 21780 | 0.3068 | - | - |
| 0.9709 | 21790 | 0.2897 | - | - |
| 0.9714 | 21800 | 0.3076 | - | - |
| 0.9718 | 21810 | 0.2272 | - | - |
| 0.9723 | 21820 | 0.2595 | - | - |
| 0.9727 | 21830 | 0.3038 | - | - |
| 0.9732 | 21840 | 0.3221 | - | - |
| 0.9736 | 21850 | 0.2846 | - | - |
| 0.9740 | 21860 | 0.2758 | - | - |
| 0.9745 | 21870 | 0.2809 | - | - |
| 0.9749 | 21880 | 0.2708 | - | - |
| 0.9754 | 21890 | 0.2734 | - | - |
| 0.9758 | 21900 | 0.2679 | - | - |
| 0.9763 | 21910 | 0.3258 | - | - |
| 0.9767 | 21920 | 0.3076 | - | - |
| 0.9772 | 21930 | 0.271 | - | - |
| 0.9776 | 21940 | 0.2906 | - | - |
| 0.9781 | 21950 | 0.2569 | - | - |
| 0.9785 | 21960 | 0.2401 | - | - |
| 0.9789 | 21970 | 0.2718 | - | - |
| 0.9794 | 21980 | 0.2482 | - | - |
| 0.9798 | 21990 | 0.3262 | - | - |
| 0.9803 | 22000 | 0.2691 | 0.8176 | 0.8155 |
| 0.9807 | 22010 | 0.246 | - | - |
| 0.9812 | 22020 | 0.3238 | - | - |
| 0.9816 | 22030 | 0.3136 | - | - |
| 0.9821 | 22040 | 0.237 | - | - |
| 0.9825 | 22050 | 0.3185 | - | - |
| 0.9830 | 22060 | 0.298 | - | - |
| 0.9834 | 22070 | 0.2432 | - | - |
| 0.9838 | 22080 | 0.2955 | - | - |
| 0.9843 | 22090 | 0.2638 | - | - |
| 0.9847 | 22100 | 0.2561 | - | - |
| 0.9852 | 22110 | 0.3268 | - | - |
| 0.9856 | 22120 | 0.3175 | - | - |
| 0.9861 | 22130 | 0.2487 | - | - |
| 0.9865 | 22140 | 0.2955 | - | - |
| 0.9870 | 22150 | 0.3133 | - | - |
| 0.9874 | 22160 | 0.3185 | - | - |
| 0.9879 | 22170 | 0.2549 | - | - |
| 0.9883 | 22180 | 0.3217 | - | - |
| 0.9887 | 22190 | 0.3037 | - | - |
| 0.9892 | 22200 | 0.2898 | - | - |
| 0.9896 | 22210 | 0.2528 | - | - |
| 0.9901 | 22220 | 0.2939 | - | - |
| 0.9905 | 22230 | 0.2631 | - | - |
| 0.9910 | 22240 | 0.2296 | - | - |
| 0.9914 | 22250 | 0.2443 | - | - |
| 0.9919 | 22260 | 0.3203 | - | - |
| 0.9923 | 22270 | 0.2499 | - | - |
| 0.9928 | 22280 | 0.3121 | - | - |
| 0.9932 | 22290 | 0.276 | - | - |
| 0.9937 | 22300 | 0.2773 | - | - |
| 0.9941 | 22310 | 0.244 | - | - |
| 0.9945 | 22320 | 0.2765 | - | - |
| 0.9950 | 22330 | 0.2612 | - | - |
| 0.9954 | 22340 | 0.3068 | - | - |
| 0.9959 | 22350 | 0.2527 | - | - |
| 0.9963 | 22360 | 0.2944 | - | - |
| 0.9968 | 22370 | 0.2735 | - | - |
| 0.9972 | 22380 | 0.2313 | - | - |
| 0.9977 | 22390 | 0.2838 | - | - |
| 0.9981 | 22400 | 0.3334 | - | - |
| 0.9986 | 22410 | 0.2485 | - | - |
| 0.9990 | 22420 | 0.2715 | - | - |
| 0.9994 | 22430 | 0.2588 | - | - |
| 0.9999 | 22440 | 0.2375 | - | - |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.1.0+cu118
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
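For reference, the loss cited above corresponds to the `MultipleNegativesRankingLoss` implementation in Sentence Transformers. A minimal sketch of how it is typically attached to a model (the base checkpoint here is an arbitrary placeholder, not the one used for this run):
```python
from sentence_transformers import SentenceTransformer, losses

# Placeholder base model; the actual checkpoint for this run is defined elsewhere in the card.
model = SentenceTransformer("bert-base-uncased")

# In-batch negatives ranking loss, as cited in Henderson et al. (2017).
loss = losses.MultipleNegativesRankingLoss(model)
```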
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
raulgdp/bert-base-cased-finetuned-ner
|
raulgdp
| 2024-11-15T21:50:54Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:biobert_json",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-15T21:38:39Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: biobert_json
type: biobert_json
config: Biobert_json
split: validation
args: Biobert_json
metrics:
- name: Precision
type: precision
value: 0.941812865497076
- name: Recall
type: recall
value: 0.966852487135506
- name: F1
type: f1
value: 0.9541684299619129
- name: Accuracy
type: accuracy
value: 0.9754933560689555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-ner
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the biobert_json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1119
- Precision: 0.9418
- Recall: 0.9669
- F1: 0.9542
- Accuracy: 0.9755
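As a quick illustration of the numbers above, a minimal inference sketch (the aggregation strategy and example sentence are assumptions, not from the card):
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="raulgdp/bert-base-cased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("El paciente presenta fiebre y dolor abdominal."))
```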
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
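These settings map directly onto `transformers.TrainingArguments`; a minimal sketch of the equivalent configuration (the output directory is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-ner",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```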
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1824 | 1.0 | 1224 | 0.1170 | 0.9227 | 0.9563 | 0.9392 | 0.9686 |
| 0.1162 | 2.0 | 2448 | 0.1138 | 0.9277 | 0.9654 | 0.9462 | 0.9717 |
| 0.0756 | 3.0 | 3672 | 0.1025 | 0.9398 | 0.9685 | 0.9540 | 0.9751 |
| 0.051 | 4.0 | 4896 | 0.1076 | 0.9425 | 0.9691 | 0.9556 | 0.9759 |
| 0.0423 | 5.0 | 6120 | 0.1119 | 0.9418 | 0.9669 | 0.9542 | 0.9755 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
neuria99/Neuria_ES-SQL_Formatted_llama-3.2-1b-15112024-cosino
|
neuria99
| 2024-11-15T21:50:36Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2024-11-15T11:36:11Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
license: llama3.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Neuria_ES-SQL_Formatted_llama-3.2-1b-15112024-cosino
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/neuria99/Llama%201B%20it%20text2SQL%20Formatted%20by%20Neuria/runs/b0lpxu8r)
# Neuria_ES-SQL_Formatted_llama-3.2-1b-15112024-cosino
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
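Since this is a PEFT adapter rather than a full checkpoint, it has to be loaded on top of the base model. A minimal sketch (dtype handling is an assumption; the base model is gated and may require a Hugging Face login):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(
    base, "neuria99/Neuria_ES-SQL_Formatted_llama-3.2-1b-15112024-cosino"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
```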
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
### Framework versions
- PEFT 0.13.0
- Transformers 4.44.1
- Pytorch 2.4.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
plesniar/tku_nec101_checkpoint
|
plesniar
| 2024-11-15T21:48:13Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-15T21:22:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
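In the absence of author-provided instructions, a minimal sketch assuming the checkpoint follows the standard `transformers` VITS text-to-audio interface (the input text is a placeholder):
```python
import torch
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("plesniar/tku_nec101_checkpoint")
tokenizer = AutoTokenizer.from_pretrained("plesniar/tku_nec101_checkpoint")

inputs = tokenizer("example input text", return_tensors="pt")
with torch.no_grad():
    # Waveform tensor of shape (batch, samples), sampled at model.config.sampling_rate.
    waveform = model(**inputs).waveform
```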
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VFawx/Brooke-monk_peli
|
VFawx
| 2024-11-15T21:27:13Z | 18 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-11-15T21:26:37Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: photo of woman, wearing Blazer at Business Meeting, <lora:brookemonkflux:1>
output:
url: images/00008-949666838.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# BrookeMonk
<Gallery />
## Model description
Civitai original: https://civitai.com/models/647665?modelVersionId=982447
## Download model
Weights for this model are available in Safetensors format.
[Download](/VFawx/Brooke-monk_peli/tree/main) them in the Files & versions tab.
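As a hedged starting point, here is a minimal 🧨 diffusers sketch in the style of other FLUX LoRA cards (the `weight_name` is a guess — check the Files & versions tab for the actual filename):
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('VFawx/Brooke-monk_peli', weight_name='brookemonkflux.safetensors')  # hypothetical filename
image = pipeline('photo of woman, wearing Blazer at Business Meeting').images[0]
```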
|
mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF
|
mradermacher
| 2024-11-15T21:27:11Z | 35 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TencentARC/LLaMA-Pro-8B-Instruct",
"base_model:quantized:TencentARC/LLaMA-Pro-8B-Instruct",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T19:52:07Z |
---
base_model: TencentARC/LLaMA-Pro-8B-Instruct
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
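As a hedged sketch only (this card documents the files, not a runtime), one way to load a quant is via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python):
```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF",
    filename="LLaMA-Pro-8B-Instruct.i1-Q4_K_M.gguf",
)
out = llm("Q: What is instruction tuning? A:", max_tokens=64)
print(out["choices"][0]["text"])
```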
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-Pro-8B-Instruct-i1-GGUF/resolve/main/LLaMA-Pro-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 7.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Chat2DB-SQL-7B-GGUF
|
mradermacher
| 2024-11-15T21:15:09Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"base_model:Chat2DB/Chat2DB-SQL-7B",
"base_model:quantized:Chat2DB/Chat2DB-SQL-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T20:37:35Z |
---
base_model: Chat2DB/Chat2DB-SQL-7B
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Chat2DB/Chat2DB-SQL-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Chat2DB-SQL-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
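For a quick smoke test, a hedged [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) sketch (any quant from the table below works; Q4_K_M is a sensible middle ground):
```python
from llama_cpp import Llama

# Fetch one quant from this repo and ask for a SQL query.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Chat2DB-SQL-7B-GGUF",
    filename="Chat2DB-SQL-7B.Q4_K_M.gguf",
)
out = llm("Write a SQL query that lists the ten most recent orders.", max_tokens=128)
print(out["choices"][0]["text"])
```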
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Chat2DB-SQL-7B-GGUF/resolve/main/Chat2DB-SQL-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF
|
mradermacher
| 2024-11-15T21:14:12Z | 45 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:BAAI/Infinity-Instruct",
"base_model:BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B",
"base_model:quantized:BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T15:36:38Z |
---
base_model: BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B
datasets:
- BAAI/Infinity-Instruct
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-8B-i1-GGUF/resolve/main/Infinity-Instruct-7M-Gen-Llama3_1-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
HarshN-0722/saree
|
HarshN-0722
| 2024-11-15T21:06:33Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-11-07T10:54:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
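No snippet is documented yet; as a hedged sketch based only on this repository's tags (CLIP, zero-shot image classification) — the image path and candidate labels are placeholders:
```python
from transformers import pipeline

# Assumes the checkpoint works with the standard zero-shot pipeline.
classifier = pipeline("zero-shot-image-classification", model="HarshN-0722/saree")
print(classifier("saree.jpg", candidate_labels=["saree", "lehenga", "kurti"]))
```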
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Databook/SmolClassifierLarge
|
Databook
| 2024-11-15T20:57:32Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-14T21:33:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
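No snippet is documented yet; as a hedged sketch based only on this repository's tags (llama backbone, feature extraction) — the pooling choice is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Databook/SmolClassifierLarge")
model = AutoModel.from_pretrained("Databook/SmolClassifierLarge")

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)
embedding = hidden.mean(dim=1)                  # simple mean pooling
print(embedding.shape)
```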
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rendchevi/roberta-base-ZVPIVN2
|
rendchevi
| 2024-11-15T20:52:24Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T20:51:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
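No snippet is documented yet; as a hedged sketch based only on this repository's tags (RoBERTa, text classification) — the label set of this checkpoint is undocumented:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rendchevi/roberta-base-ZVPIVN2")
print(classifier("An example sentence to classify."))
```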
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weishi0079/sd-class-butterflies-32
|
weishi0079
| 2024-11-15T20:49:09Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-11-15T20:48:35Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

# Load the trained unconditional DDPM and sample one butterfly image.
pipeline = DDPMPipeline.from_pretrained('weishi0079/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
coldint/phi_9.8_v3
|
coldint
| 2024-11-15T20:47:47Z | 334 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T20:44:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
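No snippet is documented yet; as a hedged sketch based only on this repository's tags (Phi-3-style causal LM, text generation) — the prompt and dtype choices are illustrative:
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="coldint/phi_9.8_v3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Explain what a language model is in one sentence.", max_new_tokens=64)[0]["generated_text"])
```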
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huwhitememes/barrontrump-lora
|
huwhitememes
| 2024-11-15T20:45:19Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-05T05:02:27Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: Barron Trump
widget:
- text: >-
A photo of Barron Trump as a hitman assassin, with dual suppressor pistols
in hand aimed at the viewer, realistic, cinematic lighting, action star
vibes
output:
url: images/example_kpdek7i4m.png
---
# Barrontrump Lora
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Barron Trump` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('huwhitememes/barrontrump-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF
|
mradermacher
| 2024-11-15T20:40:11Z | 141 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"text-generation-inference",
"en",
"dataset:glaiveai/glaive-code-assistant-v2",
"dataset:TokenBender/code_instructions_122k_alpaca_style",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:quantized:beowolx/CodeNinja-1.0-OpenChat-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T17:36:33Z |
---
base_model: beowolx/CodeNinja-1.0-OpenChat-7B
datasets:
- glaiveai/glaive-code-assistant-v2
- TokenBender/code_instructions_122k_alpaca_style
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/CodeNinja-1.0-OpenChat-7B-i1-GGUF/resolve/main/CodeNinja-1.0-OpenChat-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
huwhitememes/joebiden-lora
|
huwhitememes
| 2024-11-15T20:32:49Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-15T19:01:15Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/joebiden-lora_006400_00_20241115124649.png
text: A photo of Joe Biden, Joe Biden,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of Joe Biden, Joe Biden,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# joebiden-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of Joe Biden, Joe Biden,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
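A hedged 🧨 diffusers sketch, mirroring the other FLUX LoRA cards (the `weight_name` is a guess — check the Files & versions tab):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('huwhitememes/joebiden-lora', weight_name='joebiden-lora.safetensors')  # hypothetical filename
image = pipeline('A photo of Joe Biden, Joe Biden, speaking at a podium').images[0]
```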
|
nteku1/gpt2Reward_small
|
nteku1
| 2024-11-15T20:10:42Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T20:10:18Z |
---
base_model: openai-community/gpt2
library_name: transformers
model_name: gpt2Reward_small
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for gpt2Reward_small
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# This checkpoint is a reward model with a sequence-classification head, so it
# scores text rather than generating it; use the text-classification pipeline.
reward_model = pipeline("text-classification", model="nteku1/gpt2Reward_small", device="cuda")
print(reward_model(question))
```
## Training procedure
This model was trained with Reward.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.4.1+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Seingalt/male_white_latino_40
|
Seingalt
| 2024-11-15T20:05:20Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-15T20:05:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JLAN
---
# Male_White_Latino_40
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JLAN` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Seingalt/male_white_latino_40', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
HarshN-0722/women-tops
|
HarshN-0722
| 2024-11-15T20:03:04Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-11-14T09:56:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
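No snippet is documented yet; as a hedged sketch based only on this repository's tags (CLIP, zero-shot image classification) — the image path and labels are placeholders:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="HarshN-0722/women-tops")
print(classifier("top.jpg", candidate_labels=["t-shirt", "blouse", "tank top"]))
```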
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| danielrex/bert-pt-cased-zero-shot-anli | danielrex | 2024-11-15T19:55:02Z | 106 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-15T19:49:56Z |
---
library_name: transformers
tags: []
model-index:
- name: danielrex/bert-pt-cased-zero-shot-anli
  results:
  - task:
      type: zero-shot-classification
      name: Zero Shot Classification
    dataset:
      name: Natural Language Inference
      type: MoritzLaurer/multilingual-NLI-26lang-2mil7
      split: pt_anli
    metrics:
    - type: accuracy
      value: 0.7002
      name: Accuracy
---
# Model Card for danielrex/bert-pt-cased-zero-shot-anli
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** BERT-based sequence classification (per the repo's `bert` and `text-classification` tags)
- **Language(s) (NLP):** Portuguese (inferred from the model name and the `pt_anli` evaluation split)
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
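No official snippet is provided, but given the zero-shot-classification metadata above, a minimal sketch (assuming the model follows the standard NLI-based zero-shot pipeline convention; the example text and labels are illustrative) would be:

```python
# Minimal sketch, assuming the model is compatible with the standard
# NLI-based zero-shot-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="danielrex/bert-pt-cased-zero-shot-anli",
)

# Portuguese example, matching the model's pt_anli evaluation data.
result = classifier(
    "O novo estádio será inaugurado no próximo mês.",  # "The new stadium opens next month."
    candidate_labels=["esporte", "política", "economia"],
)
print(result["labels"][0], result["scores"][0])
```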
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
Per the model-index metadata above, the model reaches **0.7002 accuracy** on the `pt_anli` split of [MoritzLaurer/multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). No further results are reported.
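A hedged sketch of how that number could be reproduced (the split name, the `premise`/`hypothesis`/`label` column names, and the model's label strings are assumptions drawn from the metadata above, not verified here):

```python
# Hedged evaluation sketch: split name, column names, and the model's
# label strings ("entailment"/"neutral"/"contradiction") are assumptions.
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("MoritzLaurer/multilingual-NLI-26lang-2mil7", split="pt_anli")
clf = pipeline("text-classification", model="danielrex/bert-pt-cased-zero-shot-anli")

label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}  # assumed mapping
correct = 0
for ex in ds:
    pred = clf({"text": ex["premise"], "text_pair": ex["hypothesis"]})[0]["label"]
    correct += int(label2id[pred] == ex["label"])
print(f"accuracy = {correct / len(ds):.4f}")  # expected ≈ 0.7002 per the model-index
```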
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
BERT encoder with a sequence-classification head, fine-tuned for NLI-style zero-shot classification (inferred from the repo tags and model-index); the training objective is not documented.
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|